Test Report: KVM_Linux_crio 18051

a7ac499a82d5d3e781da4a49d780db6ba850b120:2024-01-31:32910

Failed tests (30/304)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 156.34
53 TestAddons/StoppedEnableDisable 154.09
81 TestFunctional/serial/CacheCmd/cache/add_local 1.5
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.12
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.74
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.86
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.25
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.26
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 170.96
224 TestMultiNode/serial/RestartKeepsNodes 687.08
226 TestMultiNode/serial/StopMultiNode 142.32
233 TestPreload 218.55
276 TestPause/serial/SecondStartNoReconfiguration 79.4
332 TestStartStop/group/no-preload/serial/Stop 139.11
336 TestStartStop/group/old-k8s-version/serial/Stop 138.8
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.42
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.41
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
357 TestStartStop/group/embed-certs/serial/Stop 138.85
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.4
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.38
363 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.32
364 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.34
365 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 278.7
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 231.88
367 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 161.91
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 288.89
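
To reproduce any one of these failures outside CI, the usual pattern is to rebuild the binary and re-run just that test by name. The lines below are a hedged sketch: they assume the integration suite's standard location under test/integration, a make target for the linux/amd64 binary, and a -minikube-start-args flag for forwarding the driver and runtime options seen in the start log further down; the exact invocation used by this job may differ.

# Sketch of a local re-run of a single failed test (build target and flags are assumptions, not taken from this report)
make out/minikube-linux-amd64
go test ./test/integration -v -timeout 90m -run 'TestAddons/parallel/Ingress' \
  -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'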
TestAddons/parallel/Ingress (156.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-165032 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-165032 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-165032 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [312711e8-169b-4200-9f42-4d5db594ed06] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [312711e8-169b-4200-9f42-4d5db594ed06] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.188085186s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-165032 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.153463845s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-165032 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.232
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-165032 addons disable ingress-dns --alsologtostderr -v=1: (1.732295102s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-165032 addons disable ingress --alsologtostderr -v=1: (7.938474776s)
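
Before the post-mortem logs below, note what actually failed: the "ssh: Process exited with status 28" above is the exit status of the remote command, i.e. curl's own exit code, and curl code 28 means the operation timed out (a refused connection would have been code 7). In other words, the request to port 80 on the node was never answered by the ingress controller in time. A hand-run triage usually looks like the sketch below; the ingress-nginx namespace and the controller deployment name are assumptions based on the default ingress addon, not values taken from this log.

# Is the controller pod actually Ready?
kubectl --context addons-165032 -n ingress-nginx get pods -o wide
# Was the test's Ingress object admitted and bound to a backend?
kubectl --context addons-165032 get ingress -A
# Repeat the failing request with verbose output and an explicit timeout
out/minikube-linux-amd64 -p addons-165032 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
# Look for sync errors, or the request itself, in the controller logs (deployment name assumed)
kubectl --context addons-165032 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50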
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-165032 -n addons-165032
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-165032 logs -n 25: (1.30452288s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-319090                                                                     | download-only-319090 | jenkins | v1.32.0 | 31 Jan 24 02:05 UTC | 31 Jan 24 02:05 UTC |
	| delete  | -p download-only-854494                                                                     | download-only-854494 | jenkins | v1.32.0 | 31 Jan 24 02:05 UTC | 31 Jan 24 02:05 UTC |
	| delete  | -p download-only-407605                                                                     | download-only-407605 | jenkins | v1.32.0 | 31 Jan 24 02:05 UTC | 31 Jan 24 02:05 UTC |
	| delete  | -p download-only-319090                                                                     | download-only-319090 | jenkins | v1.32.0 | 31 Jan 24 02:05 UTC | 31 Jan 24 02:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-352988 | jenkins | v1.32.0 | 31 Jan 24 02:05 UTC |                     |
	|         | binary-mirror-352988                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43041                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-352988                                                                     | binary-mirror-352988 | jenkins | v1.32.0 | 31 Jan 24 02:05 UTC | 31 Jan 24 02:05 UTC |
	| addons  | disable dashboard -p                                                                        | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:05 UTC |                     |
	|         | addons-165032                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:05 UTC |                     |
	|         | addons-165032                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-165032 --wait=true                                                                | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:05 UTC | 31 Jan 24 02:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:08 UTC | 31 Jan 24 02:08 UTC |
	|         | -p addons-165032                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-165032 ssh cat                                                                       | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:08 UTC | 31 Jan 24 02:08 UTC |
	|         | /opt/local-path-provisioner/pvc-acc797b0-8a0d-4af3-bfa4-607db152ba6b_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-165032 addons disable                                                                | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:08 UTC | 31 Jan 24 02:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-165032 ip                                                                            | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:08 UTC | 31 Jan 24 02:08 UTC |
	| addons  | addons-165032 addons disable                                                                | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:08 UTC | 31 Jan 24 02:08 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-165032 addons                                                                        | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:08 UTC | 31 Jan 24 02:08 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:09 UTC | 31 Jan 24 02:09 UTC |
	|         | addons-165032                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:09 UTC | 31 Jan 24 02:09 UTC |
	|         | addons-165032                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-165032 ssh curl -s                                                                   | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:09 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-165032 addons disable                                                                | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:09 UTC | 31 Jan 24 02:09 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:09 UTC | 31 Jan 24 02:09 UTC |
	|         | -p addons-165032                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-165032 addons                                                                        | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:09 UTC | 31 Jan 24 02:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-165032 addons                                                                        | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:09 UTC | 31 Jan 24 02:09 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-165032 ip                                                                            | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:11 UTC | 31 Jan 24 02:11 UTC |
	| addons  | addons-165032 addons disable                                                                | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:11 UTC | 31 Jan 24 02:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-165032 addons disable                                                                | addons-165032        | jenkins | v1.32.0 | 31 Jan 24 02:11 UTC | 31 Jan 24 02:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 02:05:06
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 02:05:06.121827 1420792 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:05:06.121994 1420792 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:05:06.122005 1420792 out.go:309] Setting ErrFile to fd 2...
	I0131 02:05:06.122010 1420792 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:05:06.122244 1420792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:05:06.122942 1420792 out.go:303] Setting JSON to false
	I0131 02:05:06.123889 1420792 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":24449,"bootTime":1706642257,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 02:05:06.123953 1420792 start.go:138] virtualization: kvm guest
	I0131 02:05:06.126234 1420792 out.go:177] * [addons-165032] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 02:05:06.128122 1420792 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 02:05:06.128112 1420792 notify.go:220] Checking for updates...
	I0131 02:05:06.129633 1420792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 02:05:06.131039 1420792 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:05:06.132284 1420792 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:05:06.133701 1420792 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 02:05:06.134976 1420792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 02:05:06.136397 1420792 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 02:05:06.169003 1420792 out.go:177] * Using the kvm2 driver based on user configuration
	I0131 02:05:06.170366 1420792 start.go:298] selected driver: kvm2
	I0131 02:05:06.170382 1420792 start.go:902] validating driver "kvm2" against <nil>
	I0131 02:05:06.170409 1420792 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 02:05:06.171140 1420792 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:05:06.171250 1420792 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 02:05:06.186637 1420792 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 02:05:06.186717 1420792 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 02:05:06.187003 1420792 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 02:05:06.187098 1420792 cni.go:84] Creating CNI manager for ""
	I0131 02:05:06.187126 1420792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:05:06.187147 1420792 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0131 02:05:06.187159 1420792 start_flags.go:321] config:
	{Name:addons-165032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-165032 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:05:06.187358 1420792 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:05:06.189346 1420792 out.go:177] * Starting control plane node addons-165032 in cluster addons-165032
	I0131 02:05:06.190896 1420792 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 02:05:06.190942 1420792 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 02:05:06.190953 1420792 cache.go:56] Caching tarball of preloaded images
	I0131 02:05:06.191051 1420792 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 02:05:06.191065 1420792 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 02:05:06.191378 1420792 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/config.json ...
	I0131 02:05:06.191404 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/config.json: {Name:mk87cda406cccadc6c72c41f1121e117c7977eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:06.191553 1420792 start.go:365] acquiring machines lock for addons-165032: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 02:05:06.191643 1420792 start.go:369] acquired machines lock for "addons-165032" in 76.259µs
	I0131 02:05:06.191661 1420792 start.go:93] Provisioning new machine with config: &{Name:addons-165032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-165032 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 02:05:06.191721 1420792 start.go:125] createHost starting for "" (driver="kvm2")
	I0131 02:05:06.193549 1420792 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0131 02:05:06.193706 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:05:06.193754 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:05:06.208538 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0131 02:05:06.209029 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:05:06.209579 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:05:06.209603 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:05:06.210062 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:05:06.210272 1420792 main.go:141] libmachine: (addons-165032) Calling .GetMachineName
	I0131 02:05:06.210497 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:05:06.210682 1420792 start.go:159] libmachine.API.Create for "addons-165032" (driver="kvm2")
	I0131 02:05:06.210738 1420792 client.go:168] LocalClient.Create starting
	I0131 02:05:06.210785 1420792 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem
	I0131 02:05:06.381654 1420792 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem
	I0131 02:05:06.669958 1420792 main.go:141] libmachine: Running pre-create checks...
	I0131 02:05:06.669991 1420792 main.go:141] libmachine: (addons-165032) Calling .PreCreateCheck
	I0131 02:05:06.670637 1420792 main.go:141] libmachine: (addons-165032) Calling .GetConfigRaw
	I0131 02:05:06.671138 1420792 main.go:141] libmachine: Creating machine...
	I0131 02:05:06.671156 1420792 main.go:141] libmachine: (addons-165032) Calling .Create
	I0131 02:05:06.671343 1420792 main.go:141] libmachine: (addons-165032) Creating KVM machine...
	I0131 02:05:06.672552 1420792 main.go:141] libmachine: (addons-165032) DBG | found existing default KVM network
	I0131 02:05:06.673517 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:06.673343 1420824 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147a60}
	I0131 02:05:06.678987 1420792 main.go:141] libmachine: (addons-165032) DBG | trying to create private KVM network mk-addons-165032 192.168.39.0/24...
	I0131 02:05:06.755602 1420792 main.go:141] libmachine: (addons-165032) DBG | private KVM network mk-addons-165032 192.168.39.0/24 created
	I0131 02:05:06.755642 1420792 main.go:141] libmachine: (addons-165032) Setting up store path in /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032 ...
	I0131 02:05:06.755658 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:06.755553 1420824 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:05:06.755677 1420792 main.go:141] libmachine: (addons-165032) Building disk image from file:///home/jenkins/minikube-integration/18051-1412717/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0131 02:05:06.755739 1420792 main.go:141] libmachine: (addons-165032) Downloading /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18051-1412717/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0131 02:05:06.996173 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:06.996022 1420824 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa...
	I0131 02:05:07.035696 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:07.035540 1420824 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/addons-165032.rawdisk...
	I0131 02:05:07.035760 1420792 main.go:141] libmachine: (addons-165032) DBG | Writing magic tar header
	I0131 02:05:07.035773 1420792 main.go:141] libmachine: (addons-165032) DBG | Writing SSH key tar header
	I0131 02:05:07.035781 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:07.035706 1420824 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032 ...
	I0131 02:05:07.035808 1420792 main.go:141] libmachine: (addons-165032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032
	I0131 02:05:07.035831 1420792 main.go:141] libmachine: (addons-165032) Setting executable bit set on /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032 (perms=drwx------)
	I0131 02:05:07.035849 1420792 main.go:141] libmachine: (addons-165032) Setting executable bit set on /home/jenkins/minikube-integration/18051-1412717/.minikube/machines (perms=drwxr-xr-x)
	I0131 02:05:07.035865 1420792 main.go:141] libmachine: (addons-165032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines
	I0131 02:05:07.035879 1420792 main.go:141] libmachine: (addons-165032) Setting executable bit set on /home/jenkins/minikube-integration/18051-1412717/.minikube (perms=drwxr-xr-x)
	I0131 02:05:07.035900 1420792 main.go:141] libmachine: (addons-165032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:05:07.035915 1420792 main.go:141] libmachine: (addons-165032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18051-1412717
	I0131 02:05:07.035932 1420792 main.go:141] libmachine: (addons-165032) Setting executable bit set on /home/jenkins/minikube-integration/18051-1412717 (perms=drwxrwxr-x)
	I0131 02:05:07.035940 1420792 main.go:141] libmachine: (addons-165032) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0131 02:05:07.035953 1420792 main.go:141] libmachine: (addons-165032) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0131 02:05:07.035966 1420792 main.go:141] libmachine: (addons-165032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0131 02:05:07.035978 1420792 main.go:141] libmachine: (addons-165032) DBG | Checking permissions on dir: /home/jenkins
	I0131 02:05:07.035988 1420792 main.go:141] libmachine: (addons-165032) DBG | Checking permissions on dir: /home
	I0131 02:05:07.036000 1420792 main.go:141] libmachine: (addons-165032) DBG | Skipping /home - not owner
	I0131 02:05:07.036010 1420792 main.go:141] libmachine: (addons-165032) Creating domain...
	I0131 02:05:07.037348 1420792 main.go:141] libmachine: (addons-165032) define libvirt domain using xml: 
	I0131 02:05:07.037382 1420792 main.go:141] libmachine: (addons-165032) <domain type='kvm'>
	I0131 02:05:07.037393 1420792 main.go:141] libmachine: (addons-165032)   <name>addons-165032</name>
	I0131 02:05:07.037406 1420792 main.go:141] libmachine: (addons-165032)   <memory unit='MiB'>4000</memory>
	I0131 02:05:07.037416 1420792 main.go:141] libmachine: (addons-165032)   <vcpu>2</vcpu>
	I0131 02:05:07.037426 1420792 main.go:141] libmachine: (addons-165032)   <features>
	I0131 02:05:07.037436 1420792 main.go:141] libmachine: (addons-165032)     <acpi/>
	I0131 02:05:07.037449 1420792 main.go:141] libmachine: (addons-165032)     <apic/>
	I0131 02:05:07.037460 1420792 main.go:141] libmachine: (addons-165032)     <pae/>
	I0131 02:05:07.037472 1420792 main.go:141] libmachine: (addons-165032)     
	I0131 02:05:07.037483 1420792 main.go:141] libmachine: (addons-165032)   </features>
	I0131 02:05:07.037496 1420792 main.go:141] libmachine: (addons-165032)   <cpu mode='host-passthrough'>
	I0131 02:05:07.037507 1420792 main.go:141] libmachine: (addons-165032)   
	I0131 02:05:07.037518 1420792 main.go:141] libmachine: (addons-165032)   </cpu>
	I0131 02:05:07.037576 1420792 main.go:141] libmachine: (addons-165032)   <os>
	I0131 02:05:07.037633 1420792 main.go:141] libmachine: (addons-165032)     <type>hvm</type>
	I0131 02:05:07.037645 1420792 main.go:141] libmachine: (addons-165032)     <boot dev='cdrom'/>
	I0131 02:05:07.037655 1420792 main.go:141] libmachine: (addons-165032)     <boot dev='hd'/>
	I0131 02:05:07.037670 1420792 main.go:141] libmachine: (addons-165032)     <bootmenu enable='no'/>
	I0131 02:05:07.037685 1420792 main.go:141] libmachine: (addons-165032)   </os>
	I0131 02:05:07.037699 1420792 main.go:141] libmachine: (addons-165032)   <devices>
	I0131 02:05:07.037713 1420792 main.go:141] libmachine: (addons-165032)     <disk type='file' device='cdrom'>
	I0131 02:05:07.037742 1420792 main.go:141] libmachine: (addons-165032)       <source file='/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/boot2docker.iso'/>
	I0131 02:05:07.037767 1420792 main.go:141] libmachine: (addons-165032)       <target dev='hdc' bus='scsi'/>
	I0131 02:05:07.037777 1420792 main.go:141] libmachine: (addons-165032)       <readonly/>
	I0131 02:05:07.037785 1420792 main.go:141] libmachine: (addons-165032)     </disk>
	I0131 02:05:07.037792 1420792 main.go:141] libmachine: (addons-165032)     <disk type='file' device='disk'>
	I0131 02:05:07.037805 1420792 main.go:141] libmachine: (addons-165032)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0131 02:05:07.037828 1420792 main.go:141] libmachine: (addons-165032)       <source file='/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/addons-165032.rawdisk'/>
	I0131 02:05:07.037848 1420792 main.go:141] libmachine: (addons-165032)       <target dev='hda' bus='virtio'/>
	I0131 02:05:07.037861 1420792 main.go:141] libmachine: (addons-165032)     </disk>
	I0131 02:05:07.037874 1420792 main.go:141] libmachine: (addons-165032)     <interface type='network'>
	I0131 02:05:07.037889 1420792 main.go:141] libmachine: (addons-165032)       <source network='mk-addons-165032'/>
	I0131 02:05:07.037902 1420792 main.go:141] libmachine: (addons-165032)       <model type='virtio'/>
	I0131 02:05:07.037916 1420792 main.go:141] libmachine: (addons-165032)     </interface>
	I0131 02:05:07.037933 1420792 main.go:141] libmachine: (addons-165032)     <interface type='network'>
	I0131 02:05:07.037947 1420792 main.go:141] libmachine: (addons-165032)       <source network='default'/>
	I0131 02:05:07.037959 1420792 main.go:141] libmachine: (addons-165032)       <model type='virtio'/>
	I0131 02:05:07.037970 1420792 main.go:141] libmachine: (addons-165032)     </interface>
	I0131 02:05:07.037983 1420792 main.go:141] libmachine: (addons-165032)     <serial type='pty'>
	I0131 02:05:07.038010 1420792 main.go:141] libmachine: (addons-165032)       <target port='0'/>
	I0131 02:05:07.038030 1420792 main.go:141] libmachine: (addons-165032)     </serial>
	I0131 02:05:07.038044 1420792 main.go:141] libmachine: (addons-165032)     <console type='pty'>
	I0131 02:05:07.038057 1420792 main.go:141] libmachine: (addons-165032)       <target type='serial' port='0'/>
	I0131 02:05:07.038070 1420792 main.go:141] libmachine: (addons-165032)     </console>
	I0131 02:05:07.038081 1420792 main.go:141] libmachine: (addons-165032)     <rng model='virtio'>
	I0131 02:05:07.038098 1420792 main.go:141] libmachine: (addons-165032)       <backend model='random'>/dev/random</backend>
	I0131 02:05:07.038114 1420792 main.go:141] libmachine: (addons-165032)     </rng>
	I0131 02:05:07.038127 1420792 main.go:141] libmachine: (addons-165032)     
	I0131 02:05:07.038138 1420792 main.go:141] libmachine: (addons-165032)     
	I0131 02:05:07.038148 1420792 main.go:141] libmachine: (addons-165032)   </devices>
	I0131 02:05:07.038159 1420792 main.go:141] libmachine: (addons-165032) </domain>
	I0131 02:05:07.038174 1420792 main.go:141] libmachine: (addons-165032) 
	I0131 02:05:07.042729 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:ef:eb:fc in network default
	I0131 02:05:07.043419 1420792 main.go:141] libmachine: (addons-165032) Ensuring networks are active...
	I0131 02:05:07.043454 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:07.044200 1420792 main.go:141] libmachine: (addons-165032) Ensuring network default is active
	I0131 02:05:07.044517 1420792 main.go:141] libmachine: (addons-165032) Ensuring network mk-addons-165032 is active
	I0131 02:05:07.044974 1420792 main.go:141] libmachine: (addons-165032) Getting domain xml...
	I0131 02:05:07.045840 1420792 main.go:141] libmachine: (addons-165032) Creating domain...
	I0131 02:05:08.237320 1420792 main.go:141] libmachine: (addons-165032) Waiting to get IP...
	I0131 02:05:08.238061 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:08.238470 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:08.238508 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:08.238454 1420824 retry.go:31] will retry after 258.736437ms: waiting for machine to come up
	I0131 02:05:08.498959 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:08.499505 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:08.499530 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:08.499435 1420824 retry.go:31] will retry after 330.545475ms: waiting for machine to come up
	I0131 02:05:08.832042 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:08.832419 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:08.832450 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:08.832371 1420824 retry.go:31] will retry after 372.402551ms: waiting for machine to come up
	I0131 02:05:09.205863 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:09.206257 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:09.206281 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:09.206203 1420824 retry.go:31] will retry after 372.76661ms: waiting for machine to come up
	I0131 02:05:09.580789 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:09.581246 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:09.581273 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:09.581191 1420824 retry.go:31] will retry after 705.788337ms: waiting for machine to come up
	I0131 02:05:10.288141 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:10.288548 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:10.288593 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:10.288488 1420824 retry.go:31] will retry after 914.621763ms: waiting for machine to come up
	I0131 02:05:11.204537 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:11.205053 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:11.205081 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:11.204974 1420824 retry.go:31] will retry after 813.186586ms: waiting for machine to come up
	I0131 02:05:12.019710 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:12.020222 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:12.020253 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:12.020181 1420824 retry.go:31] will retry after 1.339602997s: waiting for machine to come up
	I0131 02:05:13.361185 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:13.361630 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:13.361659 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:13.361581 1420824 retry.go:31] will retry after 1.467791244s: waiting for machine to come up
	I0131 02:05:14.831309 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:14.831705 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:14.831743 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:14.831651 1420824 retry.go:31] will retry after 1.625147641s: waiting for machine to come up
	I0131 02:05:16.458497 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:16.458860 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:16.458890 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:16.458828 1420824 retry.go:31] will retry after 2.556378745s: waiting for machine to come up
	I0131 02:05:19.017414 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:19.017891 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:19.017923 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:19.017834 1420824 retry.go:31] will retry after 3.547792306s: waiting for machine to come up
	I0131 02:05:22.567992 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:22.568364 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:22.568393 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:22.568334 1420824 retry.go:31] will retry after 3.278776765s: waiting for machine to come up
	I0131 02:05:25.851073 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:25.851502 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find current IP address of domain addons-165032 in network mk-addons-165032
	I0131 02:05:25.851531 1420792 main.go:141] libmachine: (addons-165032) DBG | I0131 02:05:25.851451 1420824 retry.go:31] will retry after 3.910201105s: waiting for machine to come up
	I0131 02:05:29.766456 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:29.766869 1420792 main.go:141] libmachine: (addons-165032) Found IP for machine: 192.168.39.232
	I0131 02:05:29.766905 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has current primary IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:29.766913 1420792 main.go:141] libmachine: (addons-165032) Reserving static IP address...
	I0131 02:05:29.767421 1420792 main.go:141] libmachine: (addons-165032) DBG | unable to find host DHCP lease matching {name: "addons-165032", mac: "52:54:00:9e:80:87", ip: "192.168.39.232"} in network mk-addons-165032
	I0131 02:05:29.845149 1420792 main.go:141] libmachine: (addons-165032) DBG | Getting to WaitForSSH function...
	I0131 02:05:29.845187 1420792 main.go:141] libmachine: (addons-165032) Reserved static IP address: 192.168.39.232
	I0131 02:05:29.845200 1420792 main.go:141] libmachine: (addons-165032) Waiting for SSH to be available...
	I0131 02:05:29.847783 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:29.848210 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:29.848235 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:29.848399 1420792 main.go:141] libmachine: (addons-165032) DBG | Using SSH client type: external
	I0131 02:05:29.848430 1420792 main.go:141] libmachine: (addons-165032) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa (-rw-------)
	I0131 02:05:29.848490 1420792 main.go:141] libmachine: (addons-165032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 02:05:29.848509 1420792 main.go:141] libmachine: (addons-165032) DBG | About to run SSH command:
	I0131 02:05:29.848519 1420792 main.go:141] libmachine: (addons-165032) DBG | exit 0
	I0131 02:05:29.934579 1420792 main.go:141] libmachine: (addons-165032) DBG | SSH cmd err, output: <nil>: 
	I0131 02:05:29.934824 1420792 main.go:141] libmachine: (addons-165032) KVM machine creation complete!
	I0131 02:05:29.935234 1420792 main.go:141] libmachine: (addons-165032) Calling .GetConfigRaw
	I0131 02:05:29.935858 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:05:29.936021 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:05:29.936161 1420792 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0131 02:05:29.936181 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:05:29.937723 1420792 main.go:141] libmachine: Detecting operating system of created instance...
	I0131 02:05:29.937740 1420792 main.go:141] libmachine: Waiting for SSH to be available...
	I0131 02:05:29.937747 1420792 main.go:141] libmachine: Getting to WaitForSSH function...
	I0131 02:05:29.937753 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:29.940417 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:29.940788 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:29.940828 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:29.940978 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:29.941182 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:29.941358 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:29.941529 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:29.941765 1420792 main.go:141] libmachine: Using SSH client type: native
	I0131 02:05:29.942176 1420792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 02:05:29.942190 1420792 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0131 02:05:30.053695 1420792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 02:05:30.053725 1420792 main.go:141] libmachine: Detecting the provisioner...
	I0131 02:05:30.053734 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:30.056852 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.057339 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:30.057370 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.057537 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:30.057812 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.058018 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.058216 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:30.058417 1420792 main.go:141] libmachine: Using SSH client type: native
	I0131 02:05:30.058797 1420792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 02:05:30.058812 1420792 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0131 02:05:30.170827 1420792 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0131 02:05:30.170897 1420792 main.go:141] libmachine: found compatible host: buildroot
	I0131 02:05:30.170909 1420792 main.go:141] libmachine: Provisioning with buildroot...
	I0131 02:05:30.170921 1420792 main.go:141] libmachine: (addons-165032) Calling .GetMachineName
	I0131 02:05:30.171235 1420792 buildroot.go:166] provisioning hostname "addons-165032"
	I0131 02:05:30.171264 1420792 main.go:141] libmachine: (addons-165032) Calling .GetMachineName
	I0131 02:05:30.171471 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:30.174246 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.174574 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:30.174604 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.174764 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:30.174987 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.175132 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.175290 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:30.175425 1420792 main.go:141] libmachine: Using SSH client type: native
	I0131 02:05:30.175789 1420792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 02:05:30.175808 1420792 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-165032 && echo "addons-165032" | sudo tee /etc/hostname
	I0131 02:05:30.298867 1420792 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-165032
	
	I0131 02:05:30.298910 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:30.301896 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.302233 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:30.302262 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.302439 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:30.302661 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.302902 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.303004 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:30.303143 1420792 main.go:141] libmachine: Using SSH client type: native
	I0131 02:05:30.303454 1420792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 02:05:30.303472 1420792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-165032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-165032/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-165032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 02:05:30.422321 1420792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 02:05:30.422352 1420792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 02:05:30.422375 1420792 buildroot.go:174] setting up certificates
	I0131 02:05:30.422386 1420792 provision.go:83] configureAuth start
	I0131 02:05:30.422395 1420792 main.go:141] libmachine: (addons-165032) Calling .GetMachineName
	I0131 02:05:30.422720 1420792 main.go:141] libmachine: (addons-165032) Calling .GetIP
	I0131 02:05:30.425377 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.425755 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:30.425788 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.425955 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:30.428196 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.428470 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:30.428493 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.428666 1420792 provision.go:138] copyHostCerts
	I0131 02:05:30.428747 1420792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 02:05:30.428889 1420792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 02:05:30.428972 1420792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 02:05:30.429065 1420792 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.addons-165032 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube addons-165032]
	I0131 02:05:30.502891 1420792 provision.go:172] copyRemoteCerts
	I0131 02:05:30.502954 1420792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 02:05:30.502986 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:30.505712 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.506061 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:30.506094 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.506250 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:30.506460 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.506700 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:30.506871 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:05:30.591212 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 02:05:30.613417 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0131 02:05:30.634210 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
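	[editorial note] The server.pem copied to /etc/docker above was generated with the SAN list shown in the provision step (192.168.39.232, localhost, 127.0.0.1, minikube, addons-165032). A hedged way to confirm what actually landed in the guest, assuming openssl is available there, would be:
	    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'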
	I0131 02:05:30.654785 1420792 provision.go:86] duration metric: configureAuth took 232.378745ms
	I0131 02:05:30.654816 1420792 buildroot.go:189] setting minikube options for container-runtime
	I0131 02:05:30.655021 1420792 config.go:182] Loaded profile config "addons-165032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:05:30.655121 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:30.657821 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.658175 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:30.658223 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.658378 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:30.658620 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.658782 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.658928 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:30.659109 1420792 main.go:141] libmachine: Using SSH client type: native
	I0131 02:05:30.659483 1420792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 02:05:30.659502 1420792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 02:05:30.955968 1420792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 02:05:30.955996 1420792 main.go:141] libmachine: Checking connection to Docker...
	I0131 02:05:30.956011 1420792 main.go:141] libmachine: (addons-165032) Calling .GetURL
	I0131 02:05:30.957282 1420792 main.go:141] libmachine: (addons-165032) DBG | Using libvirt version 6000000
	I0131 02:05:30.959896 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.960294 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:30.960323 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.960603 1420792 main.go:141] libmachine: Docker is up and running!
	I0131 02:05:30.960623 1420792 main.go:141] libmachine: Reticulating splines...
	I0131 02:05:30.960632 1420792 client.go:171] LocalClient.Create took 24.74988217s
	I0131 02:05:30.960658 1420792 start.go:167] duration metric: libmachine.API.Create for "addons-165032" took 24.749979105s
	I0131 02:05:30.960680 1420792 start.go:300] post-start starting for "addons-165032" (driver="kvm2")
	I0131 02:05:30.960693 1420792 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 02:05:30.960719 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:05:30.960975 1420792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 02:05:30.961001 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:30.963142 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.963487 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:30.963524 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:30.963635 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:30.963846 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:30.964008 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:30.964155 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:05:31.048086 1420792 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 02:05:31.052089 1420792 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 02:05:31.052114 1420792 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 02:05:31.052185 1420792 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 02:05:31.052207 1420792 start.go:303] post-start completed in 91.519453ms
	I0131 02:05:31.052246 1420792 main.go:141] libmachine: (addons-165032) Calling .GetConfigRaw
	I0131 02:05:31.052865 1420792 main.go:141] libmachine: (addons-165032) Calling .GetIP
	I0131 02:05:31.055666 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.056017 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:31.056043 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.056284 1420792 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/config.json ...
	I0131 02:05:31.056455 1420792 start.go:128] duration metric: createHost completed in 24.864723843s
	I0131 02:05:31.056479 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:31.058669 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.058924 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:31.058956 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.059068 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:31.059278 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:31.059453 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:31.059595 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:31.059730 1420792 main.go:141] libmachine: Using SSH client type: native
	I0131 02:05:31.060095 1420792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 02:05:31.060110 1420792 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 02:05:31.170674 1420792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706666731.144019041
	
	I0131 02:05:31.170714 1420792 fix.go:206] guest clock: 1706666731.144019041
	I0131 02:05:31.170726 1420792 fix.go:219] Guest: 2024-01-31 02:05:31.144019041 +0000 UTC Remote: 2024-01-31 02:05:31.05646722 +0000 UTC m=+24.985861602 (delta=87.551821ms)
	I0131 02:05:31.170755 1420792 fix.go:190] guest clock delta is within tolerance: 87.551821ms
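	[editorial note] The delta above is simply guest clock minus host clock at the same instant: 02:05:31.144019041 − 02:05:31.056467220 = 0.087551821 s, i.e. the 87.551821ms that fix.go compares against its tolerance.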
	I0131 02:05:31.170763 1420792 start.go:83] releasing machines lock for "addons-165032", held for 24.979109017s
	I0131 02:05:31.170792 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:05:31.171211 1420792 main.go:141] libmachine: (addons-165032) Calling .GetIP
	I0131 02:05:31.173865 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.174277 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:31.174313 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.174519 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:05:31.175099 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:05:31.175298 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:05:31.175400 1420792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 02:05:31.175451 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:31.175601 1420792 ssh_runner.go:195] Run: cat /version.json
	I0131 02:05:31.175631 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:05:31.178230 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.178261 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.178611 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:31.178637 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:31.178662 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.178681 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:31.178837 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:31.178934 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:05:31.179055 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:31.179128 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:05:31.179220 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:31.179297 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:05:31.179379 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:05:31.179455 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:05:31.295798 1420792 ssh_runner.go:195] Run: systemctl --version
	I0131 02:05:31.301268 1420792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 02:05:31.461847 1420792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 02:05:31.467221 1420792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 02:05:31.467315 1420792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 02:05:31.481838 1420792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 02:05:31.481865 1420792 start.go:475] detecting cgroup driver to use...
	I0131 02:05:31.481936 1420792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 02:05:31.494918 1420792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 02:05:31.507570 1420792 docker.go:217] disabling cri-docker service (if available) ...
	I0131 02:05:31.507642 1420792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 02:05:31.520521 1420792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 02:05:31.533034 1420792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 02:05:31.641670 1420792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 02:05:31.747874 1420792 docker.go:233] disabling docker service ...
	I0131 02:05:31.747960 1420792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 02:05:31.760007 1420792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 02:05:31.770727 1420792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 02:05:31.867482 1420792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 02:05:31.963456 1420792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 02:05:31.974734 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 02:05:31.990501 1420792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 02:05:31.990579 1420792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:05:31.999423 1420792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 02:05:31.999487 1420792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:05:32.008149 1420792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:05:32.016776 1420792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:05:32.025025 1420792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 02:05:32.034599 1420792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 02:05:32.041913 1420792 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 02:05:32.041982 1420792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 02:05:32.053884 1420792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 02:05:32.061786 1420792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 02:05:32.160447 1420792 ssh_runner.go:195] Run: sudo systemctl restart crio
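	[editorial note] For readability, the cri-o adjustments applied above (pause image, cgroupfs cgroup manager, br_netfilter fallback, ip_forward) condensed into one shell sketch; these are the same commands the log runs, shown together rather than a new procedure:
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter                          # provides /proc/sys/net/bridge/bridge-nf-call-iptables
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio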
	I0131 02:05:32.318417 1420792 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 02:05:32.318556 1420792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 02:05:32.324219 1420792 start.go:543] Will wait 60s for crictl version
	I0131 02:05:32.324310 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:05:32.329809 1420792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 02:05:32.367382 1420792 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 02:05:32.367503 1420792 ssh_runner.go:195] Run: crio --version
	I0131 02:05:32.415550 1420792 ssh_runner.go:195] Run: crio --version
	I0131 02:05:32.460272 1420792 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 02:05:32.461865 1420792 main.go:141] libmachine: (addons-165032) Calling .GetIP
	I0131 02:05:32.464585 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:32.465016 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:05:32.465052 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:05:32.465300 1420792 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 02:05:32.469343 1420792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 02:05:32.480721 1420792 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 02:05:32.480827 1420792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 02:05:32.517722 1420792 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 02:05:32.517858 1420792 ssh_runner.go:195] Run: which lz4
	I0131 02:05:32.521897 1420792 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 02:05:32.526030 1420792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 02:05:32.526052 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 02:05:34.058752 1420792 crio.go:444] Took 1.536883 seconds to copy over tarball
	I0131 02:05:34.058848 1420792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 02:05:36.976120 1420792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.91722963s)
	I0131 02:05:36.976164 1420792 crio.go:451] Took 2.917354 seconds to extract the tarball
	I0131 02:05:36.976174 1420792 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 02:05:37.017247 1420792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 02:05:37.081165 1420792 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 02:05:37.081191 1420792 cache_images.go:84] Images are preloaded, skipping loading
	I0131 02:05:37.081252 1420792 ssh_runner.go:195] Run: crio config
	I0131 02:05:37.142647 1420792 cni.go:84] Creating CNI manager for ""
	I0131 02:05:37.142678 1420792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:05:37.142706 1420792 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 02:05:37.142734 1420792 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-165032 NodeName:addons-165032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 02:05:37.142891 1420792 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-165032"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
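	[editorial note] The kubeadm config rendered above is what later gets written to /var/tmp/minikube/kubeadm.yaml. A hedged way to exercise it without changing the node (the log itself goes straight to the real init) is kubeadm's dry-run mode:
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run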
	
	I0131 02:05:37.142982 1420792 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-165032 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-165032 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 02:05:37.143050 1420792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 02:05:37.152312 1420792 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 02:05:37.152408 1420792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 02:05:37.160753 1420792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0131 02:05:37.175584 1420792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 02:05:37.190087 1420792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0131 02:05:37.204997 1420792 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0131 02:05:37.208556 1420792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 02:05:37.219632 1420792 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032 for IP: 192.168.39.232
	I0131 02:05:37.219677 1420792 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:37.219824 1420792 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 02:05:37.373160 1420792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt ...
	I0131 02:05:37.373196 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt: {Name:mk66e99161b882639e16772dac736f4572f417ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:37.373366 1420792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key ...
	I0131 02:05:37.373377 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key: {Name:mk8438819e673a23b96b4012f18a3a4d5e5cb7c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:37.373462 1420792 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 02:05:37.549373 1420792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt ...
	I0131 02:05:37.549408 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt: {Name:mkeb71d3b50e6e49665872e799e9e78b895909df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:37.549579 1420792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key ...
	I0131 02:05:37.549589 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key: {Name:mk50e86ec0fde7061389f951f00e3202608aadec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:37.549690 1420792 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.key
	I0131 02:05:37.549704 1420792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt with IP's: []
	I0131 02:05:37.892064 1420792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt ...
	I0131 02:05:37.892114 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: {Name:mk420341c9c843876facb95113a9b202ee671478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:37.892309 1420792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.key ...
	I0131 02:05:37.892328 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.key: {Name:mkbf49bbfeed436478f5407deaebeb1e735c99d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:37.892429 1420792 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.key.ca7bc7e0
	I0131 02:05:37.892454 1420792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.crt.ca7bc7e0 with IP's: [192.168.39.232 10.96.0.1 127.0.0.1 10.0.0.1]
	I0131 02:05:38.146973 1420792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.crt.ca7bc7e0 ...
	I0131 02:05:38.147012 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.crt.ca7bc7e0: {Name:mk422eace43229d755060bd4780b7255ce3fbc7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:38.147204 1420792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.key.ca7bc7e0 ...
	I0131 02:05:38.147223 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.key.ca7bc7e0: {Name:mk0ff4c77e4d17afcfff342f4e03718563332e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:38.147319 1420792 certs.go:337] copying /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.crt.ca7bc7e0 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.crt
	I0131 02:05:38.147406 1420792 certs.go:341] copying /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.key.ca7bc7e0 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.key
	I0131 02:05:38.147475 1420792 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/proxy-client.key
	I0131 02:05:38.147502 1420792 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/proxy-client.crt with IP's: []
	I0131 02:05:38.484630 1420792 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/proxy-client.crt ...
	I0131 02:05:38.484669 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/proxy-client.crt: {Name:mk21421c9552e048489da97c242be758089a26e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:38.484855 1420792 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/proxy-client.key ...
	I0131 02:05:38.484874 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/proxy-client.key: {Name:mk3c34b83b9da7bf2f2ba2325d27a37ad78b3955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:05:38.485081 1420792 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 02:05:38.485135 1420792 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 02:05:38.485171 1420792 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 02:05:38.485213 1420792 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 02:05:38.485910 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 02:05:38.508782 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 02:05:38.531193 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 02:05:38.552057 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 02:05:38.572697 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 02:05:38.594992 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 02:05:38.615546 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 02:05:38.636954 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 02:05:38.660040 1420792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 02:05:38.680397 1420792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 02:05:38.695160 1420792 ssh_runner.go:195] Run: openssl version
	I0131 02:05:38.700247 1420792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 02:05:38.709505 1420792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:05:38.713922 1420792 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:05:38.713988 1420792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:05:38.719123 1420792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
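	[editorial note] The b5213941.0 link created above follows OpenSSL's subject-hash lookup convention for trusted CAs; the hash name comes from the openssl call two lines earlier, so the pair of commands amounts to:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0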
	I0131 02:05:38.728942 1420792 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 02:05:38.732840 1420792 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0131 02:05:38.732896 1420792 kubeadm.go:404] StartCluster: {Name:addons-165032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-165032 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:05:38.732997 1420792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 02:05:38.733054 1420792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 02:05:38.767499 1420792 cri.go:89] found id: ""
	I0131 02:05:38.767588 1420792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 02:05:38.776616 1420792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 02:05:38.785366 1420792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 02:05:38.795763 1420792 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 02:05:38.795821 1420792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 02:05:38.843800 1420792 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 02:05:38.843871 1420792 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 02:05:38.970075 1420792 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 02:05:38.970226 1420792 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 02:05:38.970326 1420792 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 02:05:39.183508 1420792 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 02:05:39.297323 1420792 out.go:204]   - Generating certificates and keys ...
	I0131 02:05:39.297456 1420792 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 02:05:39.297561 1420792 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 02:05:39.430863 1420792 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0131 02:05:39.602978 1420792 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0131 02:05:39.677357 1420792 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0131 02:05:39.870340 1420792 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0131 02:05:39.974991 1420792 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0131 02:05:39.975131 1420792 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-165032 localhost] and IPs [192.168.39.232 127.0.0.1 ::1]
	I0131 02:05:40.114809 1420792 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0131 02:05:40.114996 1420792 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-165032 localhost] and IPs [192.168.39.232 127.0.0.1 ::1]
	I0131 02:05:40.423146 1420792 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0131 02:05:40.575497 1420792 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0131 02:05:40.666902 1420792 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0131 02:05:40.667051 1420792 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 02:05:41.039266 1420792 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 02:05:41.140932 1420792 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 02:05:41.243113 1420792 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 02:05:41.485673 1420792 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 02:05:41.486217 1420792 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 02:05:41.488403 1420792 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 02:05:41.490452 1420792 out.go:204]   - Booting up control plane ...
	I0131 02:05:41.490608 1420792 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 02:05:41.491436 1420792 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 02:05:41.492278 1420792 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 02:05:41.507632 1420792 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 02:05:41.507751 1420792 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 02:05:41.507789 1420792 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 02:05:41.637207 1420792 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 02:05:49.134732 1420792 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502610 seconds
	I0131 02:05:49.134885 1420792 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 02:05:49.152045 1420792 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 02:05:49.683953 1420792 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 02:05:49.684159 1420792 kubeadm.go:322] [mark-control-plane] Marking the node addons-165032 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 02:05:50.197009 1420792 kubeadm.go:322] [bootstrap-token] Using token: wk337y.q396ehxu8m0knchn
	I0131 02:05:50.198504 1420792 out.go:204]   - Configuring RBAC rules ...
	I0131 02:05:50.198624 1420792 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 02:05:50.204092 1420792 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 02:05:50.214613 1420792 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 02:05:50.221038 1420792 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 02:05:50.226060 1420792 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 02:05:50.231797 1420792 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 02:05:50.250506 1420792 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 02:05:50.513684 1420792 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 02:05:50.612607 1420792 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 02:05:50.612634 1420792 kubeadm.go:322] 
	I0131 02:05:50.612730 1420792 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 02:05:50.612754 1420792 kubeadm.go:322] 
	I0131 02:05:50.612837 1420792 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 02:05:50.612849 1420792 kubeadm.go:322] 
	I0131 02:05:50.612894 1420792 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 02:05:50.612947 1420792 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 02:05:50.613021 1420792 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 02:05:50.613032 1420792 kubeadm.go:322] 
	I0131 02:05:50.613100 1420792 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 02:05:50.613109 1420792 kubeadm.go:322] 
	I0131 02:05:50.613187 1420792 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 02:05:50.613195 1420792 kubeadm.go:322] 
	I0131 02:05:50.613248 1420792 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 02:05:50.613348 1420792 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 02:05:50.613438 1420792 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 02:05:50.613448 1420792 kubeadm.go:322] 
	I0131 02:05:50.613565 1420792 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 02:05:50.613669 1420792 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 02:05:50.613694 1420792 kubeadm.go:322] 
	I0131 02:05:50.613799 1420792 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wk337y.q396ehxu8m0knchn \
	I0131 02:05:50.613938 1420792 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 02:05:50.613959 1420792 kubeadm.go:322] 	--control-plane 
	I0131 02:05:50.613963 1420792 kubeadm.go:322] 
	I0131 02:05:50.614041 1420792 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 02:05:50.614048 1420792 kubeadm.go:322] 
	I0131 02:05:50.614155 1420792 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wk337y.q396ehxu8m0knchn \
	I0131 02:05:50.614281 1420792 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 02:05:50.616426 1420792 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 02:05:50.616563 1420792 cni.go:84] Creating CNI manager for ""
	I0131 02:05:50.616585 1420792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:05:50.618371 1420792 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 02:05:50.619722 1420792 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 02:05:50.637656 1420792 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
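
The 457-byte conflist copied to /etc/cni/net.d/1-k8s.conflist above is not reproduced in the log. As a rough illustration only, the following Go sketch writes a typical bridge-plugin conflist of the kind minikube's bridge CNI step produces; the exact JSON fields and the 10.244.0.0/16 subnet are assumptions, not the file's actual contents.

// Illustrative only: writes a typical bridge CNI conflist to the path shown
// in the log. Field values, including the subnet, are placeholder assumptions.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
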
	I0131 02:05:50.673416 1420792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 02:05:50.673563 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=addons-165032 minikube.k8s.io/updated_at=2024_01_31T02_05_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:50.673571 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:50.912170 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:50.950259 1420792 ops.go:34] apiserver oom_adj: -16
	I0131 02:05:51.412257 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:51.913204 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:52.412935 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:52.913079 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:53.412795 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:53.912934 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:54.412606 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:54.912293 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:55.412985 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:55.912246 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:56.412251 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:56.912847 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:57.412257 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:57.913190 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:58.412788 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:58.912232 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:59.412611 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:05:59.912326 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:06:00.412834 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:06:00.912862 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:06:01.412287 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:06:01.912880 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:06:02.412992 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:06:02.912364 1420792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:06:03.145023 1420792 kubeadm.go:1088] duration metric: took 12.471530719s to wait for elevateKubeSystemPrivileges.
	I0131 02:06:03.145077 1420792 kubeadm.go:406] StartCluster complete in 24.412185618s
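
The burst of "kubectl get sa default" runs above is minikube polling roughly every 500ms until the default ServiceAccount exists, which is what the 12.47s elevateKubeSystemPrivileges duration measures. A minimal sketch of that wait pattern, reusing the binary and kubeconfig paths from the log (the 2-minute timeout is an assumption, not minikube's actual limit):

// Retry `kubectl get sa default` until it succeeds or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
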
	I0131 02:06:03.145105 1420792 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:06:03.145252 1420792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:06:03.145760 1420792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:06:03.145999 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 02:06:03.146136 1420792 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0131 02:06:03.146262 1420792 addons.go:69] Setting ingress=true in profile "addons-165032"
	I0131 02:06:03.146272 1420792 addons.go:69] Setting ingress-dns=true in profile "addons-165032"
	I0131 02:06:03.146271 1420792 config.go:182] Loaded profile config "addons-165032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:06:03.146286 1420792 addons.go:234] Setting addon ingress-dns=true in "addons-165032"
	I0131 02:06:03.146281 1420792 addons.go:69] Setting default-storageclass=true in profile "addons-165032"
	I0131 02:06:03.146300 1420792 addons.go:69] Setting helm-tiller=true in profile "addons-165032"
	I0131 02:06:03.146314 1420792 addons.go:234] Setting addon helm-tiller=true in "addons-165032"
	I0131 02:06:03.146317 1420792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-165032"
	I0131 02:06:03.146323 1420792 addons.go:69] Setting registry=true in profile "addons-165032"
	I0131 02:06:03.146336 1420792 addons.go:234] Setting addon registry=true in "addons-165032"
	I0131 02:06:03.146388 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.146397 1420792 addons.go:69] Setting cloud-spanner=true in profile "addons-165032"
	I0131 02:06:03.146407 1420792 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-165032"
	I0131 02:06:03.146418 1420792 addons.go:69] Setting volumesnapshots=true in profile "addons-165032"
	I0131 02:06:03.146425 1420792 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-165032"
	I0131 02:06:03.146429 1420792 addons.go:234] Setting addon volumesnapshots=true in "addons-165032"
	I0131 02:06:03.146466 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.146409 1420792 addons.go:234] Setting addon cloud-spanner=true in "addons-165032"
	I0131 02:06:03.146601 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.146735 1420792 addons.go:69] Setting metrics-server=true in profile "addons-165032"
	I0131 02:06:03.146781 1420792 addons.go:234] Setting addon metrics-server=true in "addons-165032"
	I0131 02:06:03.146833 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.146867 1420792 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-165032"
	I0131 02:06:03.146891 1420792 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-165032"
	I0131 02:06:03.146892 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.146915 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.146923 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.146930 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.146939 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.146388 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.147115 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.147148 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.147153 1420792 addons.go:69] Setting inspektor-gadget=true in profile "addons-165032"
	I0131 02:06:03.146258 1420792 addons.go:69] Setting yakd=true in profile "addons-165032"
	I0131 02:06:03.147174 1420792 addons.go:234] Setting addon inspektor-gadget=true in "addons-165032"
	I0131 02:06:03.146286 1420792 addons.go:234] Setting addon ingress=true in "addons-165032"
	I0131 02:06:03.146398 1420792 addons.go:69] Setting storage-provisioner=true in profile "addons-165032"
	I0131 02:06:03.147202 1420792 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-165032"
	I0131 02:06:03.147211 1420792 addons.go:234] Setting addon storage-provisioner=true in "addons-165032"
	I0131 02:06:03.146388 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.146875 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.147226 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.147240 1420792 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-165032"
	I0131 02:06:03.147252 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.147258 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.146292 1420792 addons.go:69] Setting gcp-auth=true in profile "addons-165032"
	I0131 02:06:03.147335 1420792 mustload.go:65] Loading cluster: addons-165032
	I0131 02:06:03.147285 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.147355 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.147192 1420792 addons.go:234] Setting addon yakd=true in "addons-165032"
	I0131 02:06:03.147477 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.147544 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.147587 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.147663 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.147682 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.147683 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.147798 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.147826 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.147902 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.147952 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.148065 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.148101 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.148161 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.148180 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.148200 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.148232 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.148309 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.148347 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.148463 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.148799 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.148834 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
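
Each "Launching plugin server for driver kvm2" line above starts the docker-machine-driver-kvm2 binary as a local plugin; the "Plugin server listening at address 127.0.0.1:<port>" lines that follow are its RPC endpoints, against which minikube then calls methods such as GetVersion and GetState. The sketch below is NOT libmachine's real API, only a simplified net/rpc client showing the shape of that handshake; the service and method names are stand-ins.

// Simplified stand-in for the driver-plugin RPC calls seen in the log.
package main

import (
	"fmt"
	"net/rpc"
)

func main() {
	// Address as printed by the plugin, e.g. 127.0.0.1:46757.
	client, err := rpc.Dial("tcp", "127.0.0.1:46757")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	var apiVersion int
	// "Driver.GetVersion" is a hypothetical service/method name for illustration.
	if err := client.Call("Driver.GetVersion", 0, &apiVersion); err != nil {
		panic(err)
	}
	fmt.Println("Using API Version", apiVersion)
}
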
	I0131 02:06:03.166652 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0131 02:06:03.166926 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0131 02:06:03.167027 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I0131 02:06:03.167201 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
	I0131 02:06:03.167483 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.167649 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.167723 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.168167 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.168191 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.168362 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.168382 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.168509 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.168527 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.168594 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.168653 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38455
	I0131 02:06:03.168811 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.168909 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.168988 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.169046 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.169522 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.169562 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.170010 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.170050 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.170193 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.170211 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.170655 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.170689 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.171162 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.171246 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.171291 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.171854 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.171891 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.174886 1420792 config.go:182] Loaded profile config "addons-165032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:06:03.175260 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.175300 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.184844 1420792 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-165032"
	I0131 02:06:03.184899 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.185333 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.185378 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.187032 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.190266 1420792 addons.go:234] Setting addon default-storageclass=true in "addons-165032"
	I0131 02:06:03.194534 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.195044 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.195377 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.197472 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0131 02:06:03.198214 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.198916 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.198938 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.199322 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0131 02:06:03.199842 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.200337 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.200359 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.200736 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.201329 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.201372 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.201588 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0131 02:06:03.201742 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.201980 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.201988 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.202537 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.202557 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.202897 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.203463 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.203489 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.205118 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.207118 1420792 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0131 02:06:03.206123 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0131 02:06:03.206140 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I0131 02:06:03.208989 1420792 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0131 02:06:03.209005 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0131 02:06:03.209028 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.211966 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
	I0131 02:06:03.212603 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.213234 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.213237 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.213272 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.213275 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.213901 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.213973 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.213994 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.213909 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.214059 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.214122 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.214270 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.214417 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.214593 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
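
The sequence ending in "new ssh client" above is the recurring addon-install pattern in this log: resolve the VM's SSH host, port, key and user, then stream an in-memory manifest to /etc/kubernetes/addons/ ("scp memory --> ..."). A minimal sketch of that step, assuming golang.org/x/crypto/ssh; this is not minikube's sshutil/ssh_runner code, and only the host, port, user and key path are taken from the log.

// Stream an in-memory manifest to the node over SSH and write it to dest.
package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func copyToNode(addr, user, keyPath, dest string, payload []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()

	session.Stdin = bytes.NewReader(payload)
	// Write stdin to the destination path on the node.
	return session.Run("sudo tee " + dest + " > /dev/null")
}

func main() {
	manifest := []byte("# helm-tiller-dp.yaml contents would go here")
	err := copyToNode("192.168.39.232:22", "docker",
		"/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa",
		"/etc/kubernetes/addons/helm-tiller-dp.yaml", manifest)
	if err != nil {
		panic(err)
	}
}
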
	I0131 02:06:03.214821 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.214867 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.215723 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.216366 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.216387 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.216772 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.217347 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.217394 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.218898 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.219165 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.221044 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.224087 1420792 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0131 02:06:03.221894 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45575
	I0131 02:06:03.222123 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43049
	I0131 02:06:03.225677 1420792 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0131 02:06:03.225707 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0131 02:06:03.225734 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.226550 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.226813 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.227220 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.227247 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.227687 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.227706 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.227769 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.228320 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.228359 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.228424 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.228710 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.229752 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.230366 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.230389 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.230580 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.230783 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.230991 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.231131 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.231711 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.233821 1420792 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0131 02:06:03.232865 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I0131 02:06:03.234803 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40791
	I0131 02:06:03.234834 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0131 02:06:03.235484 1420792 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0131 02:06:03.235497 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0131 02:06:03.235515 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.236783 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.237547 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.237996 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.238015 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.238468 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.238944 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.240226 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.240245 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.240858 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:03.241010 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.241429 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.241477 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.241680 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.241727 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.241769 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.241879 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.242028 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.242170 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.242574 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0131 02:06:03.242721 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.242836 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.243173 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.243613 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.243660 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.244134 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.244151 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.244563 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.244593 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.244627 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.244652 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0131 02:06:03.244975 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.245135 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.245172 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.245198 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38077
	I0131 02:06:03.245480 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.245505 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.245615 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.246005 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.246081 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.246104 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.246662 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.246746 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.246780 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.246989 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.248047 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0131 02:06:03.248067 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0131 02:06:03.248598 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.249080 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.249148 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.249214 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.249236 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.249385 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.251440 1420792 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0131 02:06:03.249855 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.250453 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.253342 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.254597 1420792 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0131 02:06:03.253512 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.253921 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.257718 1420792 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0131 02:06:03.256749 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
	I0131 02:06:03.256790 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.257552 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.258335 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45937
	I0131 02:06:03.258909 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41855
	I0131 02:06:03.259367 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.259511 1420792 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0131 02:06:03.259528 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0131 02:06:03.259549 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.260223 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.260288 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:03.260326 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:03.260330 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.261212 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.261232 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.261258 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.261363 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.261376 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.261649 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.262082 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.262408 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.262477 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.262747 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.263128 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.263843 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.264337 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.264363 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.264526 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.264827 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.265137 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.265336 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.266033 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.266102 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.266172 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.266350 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.268218 1420792 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0131 02:06:03.269790 1420792 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 02:06:03.269812 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 02:06:03.269835 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.268260 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.271592 1420792 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0131 02:06:03.272928 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41197
	I0131 02:06:03.273581 1420792 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0131 02:06:03.274556 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0131 02:06:03.274579 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.274638 1420792 out.go:177]   - Using image docker.io/registry:2.8.3
	I0131 02:06:03.274878 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0131 02:06:03.274902 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0131 02:06:03.275452 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.275545 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.276355 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0131 02:06:03.276806 1420792 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0131 02:06:03.277008 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.278516 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.277460 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.278533 1420792 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0131 02:06:03.278551 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0131 02:06:03.277499 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.278577 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.277566 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.277993 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.278646 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.278030 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.278997 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.279046 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.279298 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.280255 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.280266 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.280291 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.280328 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.280350 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.280370 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.280362 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.280413 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.280426 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.280452 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.280466 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.281000 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.281002 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.281018 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.281054 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.281068 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.281228 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.281279 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42715
	I0131 02:06:03.281303 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.281520 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.281519 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.281854 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.282106 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.282829 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.282849 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.283315 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.283517 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.284207 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.286306 1420792 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0131 02:06:03.284785 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.285234 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.285266 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.285739 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.285895 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I0131 02:06:03.285908 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.286563 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.289487 1420792 out.go:177]   - Using image docker.io/busybox:stable
	I0131 02:06:03.288141 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.288649 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.288680 1420792 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 02:06:03.288948 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.291283 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.291348 1420792 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0131 02:06:03.292785 1420792 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0131 02:06:03.292955 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 02:06:03.294378 1420792 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0131 02:06:03.294393 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0131 02:06:03.294400 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.294412 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.292976 1420792 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0131 02:06:03.292995 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0131 02:06:03.292835 1420792 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0131 02:06:03.293211 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.293704 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.295393 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0131 02:06:03.296308 1420792 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0131 02:06:03.296324 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0131 02:06:03.295907 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I0131 02:06:03.296341 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.296388 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.297944 1420792 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0131 02:06:03.297969 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0131 02:06:03.296439 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.296648 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.297060 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.297069 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:03.297986 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.298772 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.298806 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.299033 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.299050 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.299127 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.299259 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:03.299276 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:03.300963 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.301041 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.301091 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.301109 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.301124 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.301151 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.301423 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.301627 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.301915 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.301936 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.301968 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.302013 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.303084 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.303113 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.303125 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:03.303150 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.303241 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.303337 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.303337 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.303372 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.303409 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.305207 1420792 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0131 02:06:03.303436 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.303461 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.303648 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.303684 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:03.303704 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.304001 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.304288 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.305356 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.307015 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.307025 1420792 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0131 02:06:03.308764 1420792 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0131 02:06:03.307126 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.307163 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.307195 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.307682 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.308515 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:03.310662 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.310667 1420792 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0131 02:06:03.310815 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.310913 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.312771 1420792 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0131 02:06:03.312971 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.314645 1420792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 02:06:03.316257 1420792 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 02:06:03.315315 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.316304 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 02:06:03.317825 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.319306 1420792 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W0131 02:06:03.318621 1420792 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49570->192.168.39.232:22: read: connection reset by peer
	I0131 02:06:03.321122 1420792 retry.go:31] will retry after 184.89641ms: ssh: handshake failed: read tcp 192.168.39.1:49570->192.168.39.232:22: read: connection reset by peer
	I0131 02:06:03.321130 1420792 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0131 02:06:03.321292 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.321930 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.322794 1420792 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0131 02:06:03.324345 1420792 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0131 02:06:03.324365 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0131 02:06:03.322830 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.324384 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:03.324416 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.322999 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.324633 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.324803 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.327801 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.328297 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:03.328323 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:03.328511 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:03.328706 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:03.328960 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:03.329105 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:03.480559 1420792 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0131 02:06:03.480583 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0131 02:06:03.502513 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0131 02:06:03.513162 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0131 02:06:03.514548 1420792 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0131 02:06:03.514570 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0131 02:06:03.564305 1420792 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0131 02:06:03.564333 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0131 02:06:03.594901 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0131 02:06:03.624219 1420792 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0131 02:06:03.624267 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0131 02:06:03.658081 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
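That one-line CoreDNS patch is dense; broken out, it is just a fetch-edit-replace of the coredns ConfigMap (a sketch of the same pipeline from the line above, with the in-VM kubectl path and KUBECONFIG shortened to plain kubectl; 192.168.39.1 is the host-side IP shown in the log):

	# read the live Corefile, splice a hosts{} block in front of the
	# "forward . /etc/resolv.conf" plugin so host.minikube.internal resolves
	# to the host, add a "log" directive before "errors", then replace it
	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | kubectl replace -f -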
	I0131 02:06:03.679472 1420792 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-165032" context rescaled to 1 replicas
	I0131 02:06:03.679532 1420792 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 02:06:03.682292 1420792 out.go:177] * Verifying Kubernetes components...
	I0131 02:06:03.684021 1420792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:06:03.692945 1420792 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 02:06:03.692967 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0131 02:06:03.701676 1420792 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0131 02:06:03.701698 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0131 02:06:03.714581 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 02:06:03.719187 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0131 02:06:03.721776 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0131 02:06:03.736085 1420792 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0131 02:06:03.736111 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0131 02:06:03.758887 1420792 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0131 02:06:03.758918 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0131 02:06:03.758971 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 02:06:03.837528 1420792 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0131 02:06:03.837554 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0131 02:06:03.897472 1420792 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0131 02:06:03.897500 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0131 02:06:03.943035 1420792 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0131 02:06:03.943059 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0131 02:06:03.951982 1420792 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0131 02:06:03.952007 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0131 02:06:04.021775 1420792 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 02:06:04.021802 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 02:06:04.032973 1420792 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0131 02:06:04.033002 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0131 02:06:04.073374 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0131 02:06:04.082091 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0131 02:06:04.087795 1420792 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0131 02:06:04.087819 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0131 02:06:04.094949 1420792 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0131 02:06:04.094972 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0131 02:06:04.122170 1420792 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 02:06:04.122201 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 02:06:04.161834 1420792 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0131 02:06:04.161864 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0131 02:06:04.170842 1420792 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0131 02:06:04.170876 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0131 02:06:04.241168 1420792 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0131 02:06:04.241196 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0131 02:06:04.277141 1420792 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0131 02:06:04.277172 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0131 02:06:04.313602 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 02:06:04.313872 1420792 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0131 02:06:04.313895 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0131 02:06:04.318603 1420792 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0131 02:06:04.318632 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0131 02:06:04.352585 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0131 02:06:04.388418 1420792 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0131 02:06:04.388457 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0131 02:06:04.418096 1420792 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0131 02:06:04.418123 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0131 02:06:04.430839 1420792 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0131 02:06:04.430864 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0131 02:06:04.472009 1420792 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0131 02:06:04.472041 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0131 02:06:04.510896 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0131 02:06:04.528388 1420792 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0131 02:06:04.528418 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0131 02:06:04.580827 1420792 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0131 02:06:04.580863 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0131 02:06:04.601806 1420792 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0131 02:06:04.601835 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0131 02:06:04.641243 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0131 02:06:04.686984 1420792 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0131 02:06:04.687015 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0131 02:06:04.729071 1420792 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0131 02:06:04.729132 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0131 02:06:04.767309 1420792 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0131 02:06:04.767352 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0131 02:06:04.786116 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0131 02:06:07.347865 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.845296607s)
	I0131 02:06:07.347955 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:07.347971 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:07.348416 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:07.348475 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:07.348488 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:07.348498 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:07.348512 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:07.348777 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:07.348800 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:10.493923 1420792 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0131 02:06:10.493985 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:10.496979 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:10.497477 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:10.497515 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:10.497699 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:10.497936 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:10.498157 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:10.498322 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:10.683939 1420792 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0131 02:06:10.825664 1420792 addons.go:234] Setting addon gcp-auth=true in "addons-165032"
	I0131 02:06:10.825723 1420792 host.go:66] Checking if "addons-165032" exists ...
	I0131 02:06:10.826053 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:10.826091 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:10.863308 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0131 02:06:10.863814 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:10.864378 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:10.864404 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:10.864768 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:10.865261 1420792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:06:10.865290 1420792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:06:10.881300 1420792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0131 02:06:10.881815 1420792 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:06:10.882461 1420792 main.go:141] libmachine: Using API Version  1
	I0131 02:06:10.882508 1420792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:06:10.882897 1420792 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:06:10.883147 1420792 main.go:141] libmachine: (addons-165032) Calling .GetState
	I0131 02:06:10.884813 1420792 main.go:141] libmachine: (addons-165032) Calling .DriverName
	I0131 02:06:10.885074 1420792 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0131 02:06:10.885105 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHHostname
	I0131 02:06:10.888086 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:10.888621 1420792 main.go:141] libmachine: (addons-165032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:87", ip: ""} in network mk-addons-165032: {Iface:virbr1 ExpiryTime:2024-01-31 03:05:21 +0000 UTC Type:0 Mac:52:54:00:9e:80:87 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:addons-165032 Clientid:01:52:54:00:9e:80:87}
	I0131 02:06:10.888642 1420792 main.go:141] libmachine: (addons-165032) DBG | domain addons-165032 has defined IP address 192.168.39.232 and MAC address 52:54:00:9e:80:87 in network mk-addons-165032
	I0131 02:06:10.888814 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHPort
	I0131 02:06:10.889022 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHKeyPath
	I0131 02:06:10.889205 1420792 main.go:141] libmachine: (addons-165032) Calling .GetSSHUsername
	I0131 02:06:10.889377 1420792 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/addons-165032/id_rsa Username:docker}
	I0131 02:06:12.065372 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.552173347s)
	I0131 02:06:12.065455 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.065456 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.470519969s)
	I0131 02:06:12.065506 1420792 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.407385935s)
	I0131 02:06:12.065526 1420792 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 02:06:12.065509 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.065542 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.065570 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.350965948s)
	I0131 02:06:12.065469 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.065597 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.065608 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.065626 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.346410483s)
	I0131 02:06:12.065544 1420792 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (8.381493426s)
	I0131 02:06:12.065659 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.065670 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.065688 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.343888277s)
	I0131 02:06:12.065705 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.065714 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.065761 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.306771194s)
	I0131 02:06:12.065776 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.065784 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.065862 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.992456339s)
	I0131 02:06:12.065880 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.065890 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.065957 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.983833697s)
	I0131 02:06:12.065973 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.065982 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.066072 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.752442864s)
	I0131 02:06:12.066097 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.066108 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.068711 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.716068374s)
	I0131 02:06:12.068742 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	W0131 02:06:12.068771 1420792 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0131 02:06:12.068794 1420792 retry.go:31] will retry after 322.882213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
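The failure above is the usual CRD-ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the CRD that introduces that kind is created in the same kubectl apply, so the first attempt is rejected with "ensure CRDs are installed first"; the retry with apply --force later in the log completes cleanly. A minimal sketch of the ordering that avoids the race, using the manifest names from the log:

	# create the snapshot CRDs first and wait until the API server reports
	# them Established, then apply the VolumeSnapshotClass that uses them
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml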
	I0131 02:06:12.068800 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.068810 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.068821 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.068831 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.068873 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.557934111s)
	I0131 02:06:12.068895 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.068900 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.068907 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.068924 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.068941 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.068950 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.068965 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.069016 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.069044 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.069109 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.427819337s)
	I0131 02:06:12.069135 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.069157 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.069284 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.069297 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.069306 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.069322 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.069624 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.069642 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.069654 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.069674 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.069704 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.069716 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.069838 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.069848 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.069867 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.069875 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.069897 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.070262 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.070357 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.070367 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.070374 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.070376 1420792 addons.go:470] Verifying addon metrics-server=true in "addons-165032"
	I0131 02:06:12.070409 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.070419 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.070436 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.070445 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.070510 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.070527 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.070566 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.070662 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.070514 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.070670 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.070686 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.070694 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.070708 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.070715 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.070730 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.070740 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.070749 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.070816 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.070845 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.070853 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.070861 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.070874 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.070879 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.071604 1420792 node_ready.go:35] waiting up to 6m0s for node "addons-165032" to be "Ready" ...
	I0131 02:06:12.071801 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.071812 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.071901 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.071924 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.071933 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.072115 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.072813 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.072828 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.072839 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.072849 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.072926 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.072964 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.072974 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.072986 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.073014 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.073044 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.076190 1420792 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-165032 service yakd-dashboard -n yakd-dashboard
	
	I0131 02:06:12.073333 1420792 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0131 02:06:12.073363 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.073394 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.073522 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.073547 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.073562 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.073924 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.073966 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.077874 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.077883 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.077899 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.077911 1420792 addons.go:470] Verifying addon registry=true in "addons-165032"
	I0131 02:06:12.077916 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.077928 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.077959 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.079645 1420792 out.go:177] * Verifying registry addon...
	I0131 02:06:12.077912 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.078288 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.078303 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.081458 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.081491 1420792 addons.go:470] Verifying addon ingress=true in "addons-165032"
	I0131 02:06:12.083160 1420792 out.go:177] * Verifying ingress addon...
	I0131 02:06:12.082287 1420792 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0131 02:06:12.085023 1420792 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0131 02:06:12.098860 1420792 node_ready.go:49] node "addons-165032" has status "Ready":"True"
	I0131 02:06:12.098891 1420792 node_ready.go:38] duration metric: took 27.267812ms waiting for node "addons-165032" to be "Ready" ...
	I0131 02:06:12.098905 1420792 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:06:12.101088 1420792 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0131 02:06:12.101110 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:12.105868 1420792 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0131 02:06:12.105894 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
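The kapi waiters above poll the node and the labelled addon pods until they report Ready; roughly the same checks can be run by hand against the cluster (a sketch, with the label selectors and node name taken from the log and illustrative timeouts):

	# node readiness, then the registry and ingress-nginx controller pods
	kubectl wait --for=condition=Ready node/addons-165032 --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m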
	I0131 02:06:12.117133 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.117177 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.117229 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.117259 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.117468 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.117491 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.117591 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.117607 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	W0131 02:06:12.117728 1420792 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
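The storage-provisioner-rancher warning is an optimistic-concurrency conflict: the StorageClass object changed between read and update while minikube tried to mark local-path as the cluster default. If it does not clear up on a later pass, the annotation can be set directly (a hedged workaround sketch, not something the test itself runs):

	# mark the local-path StorageClass as the cluster default
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'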
	I0131 02:06:12.126716 1420792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fw9nr" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:12.392725 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0131 02:06:12.523926 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.737750704s)
	I0131 02:06:12.523996 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.524011 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.524007 1420792 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.638898882s)
	I0131 02:06:12.525789 1420792 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0131 02:06:12.524465 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.524494 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.527235 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.527250 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:12.527266 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:12.528755 1420792 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0131 02:06:12.527550 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:12.527572 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:12.530550 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:12.530576 1420792 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-165032"
	I0131 02:06:12.530592 1420792 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0131 02:06:12.530609 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0131 02:06:12.532173 1420792 out.go:177] * Verifying csi-hostpath-driver addon...
	I0131 02:06:12.534394 1420792 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0131 02:06:12.566456 1420792 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0131 02:06:12.566496 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:12.633253 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:12.633264 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:12.648789 1420792 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0131 02:06:12.648823 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0131 02:06:12.755370 1420792 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0131 02:06:12.755402 1420792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0131 02:06:13.014123 1420792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0131 02:06:13.172362 1420792 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0131 02:06:13.172389 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:13.212200 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:13.212441 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:13.551420 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:13.622689 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:13.622845 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:14.074981 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:14.120362 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:14.120493 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:14.145752 1420792 pod_ready.go:102] pod "coredns-5dd5756b68-fw9nr" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:14.544223 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:14.622634 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:14.624250 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:15.044128 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:15.111249 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:15.126406 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:15.289430 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.896631457s)
	I0131 02:06:15.289487 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:15.289502 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:15.289869 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:15.289895 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:15.289907 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:15.289917 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:15.290244 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:15.290261 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:15.290275 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:15.586251 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:15.671611 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:15.671769 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:15.760273 1420792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.746107841s)
	I0131 02:06:15.760327 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:15.760341 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:15.760689 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:15.760711 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:15.760731 1420792 main.go:141] libmachine: Making call to close driver server
	I0131 02:06:15.760740 1420792 main.go:141] libmachine: (addons-165032) Calling .Close
	I0131 02:06:15.761014 1420792 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:06:15.761153 1420792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:06:15.761047 1420792 main.go:141] libmachine: (addons-165032) DBG | Closing plugin on server side
	I0131 02:06:15.763306 1420792 addons.go:470] Verifying addon gcp-auth=true in "addons-165032"
	I0131 02:06:15.765315 1420792 out.go:177] * Verifying gcp-auth addon...
	I0131 02:06:15.767622 1420792 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0131 02:06:15.793508 1420792 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0131 02:06:15.793534 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:16.044577 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:16.092867 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:16.097010 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:16.272432 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:16.541145 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:16.591318 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:16.591946 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:16.636063 1420792 pod_ready.go:102] pod "coredns-5dd5756b68-fw9nr" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:16.773149 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:17.042040 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:17.089482 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:17.090861 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:17.271664 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:17.541315 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:17.589763 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:17.589841 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:17.773284 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:18.040902 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:18.090821 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:18.090938 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:18.272624 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:18.540110 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:18.591476 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:18.591513 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:18.772256 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:19.044603 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:19.090343 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:19.090640 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:19.133255 1420792 pod_ready.go:102] pod "coredns-5dd5756b68-fw9nr" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:19.272058 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:19.541872 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:19.603336 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:19.628170 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:19.771914 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:20.039917 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:20.091333 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:20.098772 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:20.280444 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:20.541474 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:20.591564 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:20.592686 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:20.771736 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:21.040869 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:21.097014 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:21.111510 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:21.133929 1420792 pod_ready.go:102] pod "coredns-5dd5756b68-fw9nr" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:21.274515 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:21.547868 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:21.592663 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:21.596877 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:21.777789 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:22.040216 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:22.093625 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:22.095908 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:22.279153 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:22.541571 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:22.592655 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:22.592868 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:22.772162 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:23.045277 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:23.093030 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:23.094272 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:23.140233 1420792 pod_ready.go:102] pod "coredns-5dd5756b68-fw9nr" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:23.295240 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:23.966930 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:23.970356 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:23.970627 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:23.970944 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:24.055346 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:24.091865 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:24.092715 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:24.272189 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:24.541724 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:24.590552 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:24.593358 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:24.771206 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:25.042007 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:25.092681 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:25.092694 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:25.148603 1420792 pod_ready.go:102] pod "coredns-5dd5756b68-fw9nr" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:25.281619 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:25.586663 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:25.594029 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:25.594587 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:25.666403 1420792 pod_ready.go:92] pod "coredns-5dd5756b68-fw9nr" in "kube-system" namespace has status "Ready":"True"
	I0131 02:06:25.666439 1420792 pod_ready.go:81] duration metric: took 13.539693139s waiting for pod "coredns-5dd5756b68-fw9nr" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.666453 1420792 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-165032" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.697569 1420792 pod_ready.go:92] pod "etcd-addons-165032" in "kube-system" namespace has status "Ready":"True"
	I0131 02:06:25.697598 1420792 pod_ready.go:81] duration metric: took 31.137436ms waiting for pod "etcd-addons-165032" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.697607 1420792 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-165032" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.712739 1420792 pod_ready.go:92] pod "kube-apiserver-addons-165032" in "kube-system" namespace has status "Ready":"True"
	I0131 02:06:25.712772 1420792 pod_ready.go:81] duration metric: took 15.157403ms waiting for pod "kube-apiserver-addons-165032" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.712788 1420792 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-165032" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.734966 1420792 pod_ready.go:92] pod "kube-controller-manager-addons-165032" in "kube-system" namespace has status "Ready":"True"
	I0131 02:06:25.734996 1420792 pod_ready.go:81] duration metric: took 22.199408ms waiting for pod "kube-controller-manager-addons-165032" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.735011 1420792 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88dcq" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.739735 1420792 pod_ready.go:92] pod "kube-proxy-88dcq" in "kube-system" namespace has status "Ready":"True"
	I0131 02:06:25.739760 1420792 pod_ready.go:81] duration metric: took 4.741221ms waiting for pod "kube-proxy-88dcq" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.739772 1420792 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-165032" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:25.771941 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:26.033570 1420792 pod_ready.go:92] pod "kube-scheduler-addons-165032" in "kube-system" namespace has status "Ready":"True"
	I0131 02:06:26.033608 1420792 pod_ready.go:81] duration metric: took 293.826448ms waiting for pod "kube-scheduler-addons-165032" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:26.033623 1420792 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-wwrv8" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:26.041103 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:26.091588 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:26.091608 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:26.270803 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:26.540171 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:26.589685 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:26.591640 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:26.771890 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:27.041435 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:27.098464 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:27.100923 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:27.272561 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:27.542169 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:27.590356 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:27.590530 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:27.771248 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:28.039899 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:28.043053 1420792 pod_ready.go:102] pod "metrics-server-69cf46c98-wwrv8" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:28.089052 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:28.089837 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:28.271809 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:28.544236 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:28.591207 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:28.593541 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:28.771605 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:29.041477 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:29.091396 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:29.091484 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:29.271740 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:29.542033 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:29.590509 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:29.590581 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:29.772494 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:30.042198 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:30.045053 1420792 pod_ready.go:102] pod "metrics-server-69cf46c98-wwrv8" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:30.091866 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:30.092067 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:30.272149 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:30.541161 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:30.593385 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:30.594317 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:30.771939 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:31.041701 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:31.090066 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:31.092422 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:31.272492 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:31.541910 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:31.590390 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:31.591361 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:31.772033 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:32.042013 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:32.090741 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:32.091448 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:32.271883 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:32.542826 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:32.545417 1420792 pod_ready.go:102] pod "metrics-server-69cf46c98-wwrv8" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:32.593445 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:32.594162 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:33.110038 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:33.110902 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:33.115407 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:33.117692 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:33.271833 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:33.543646 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:33.593150 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:33.595623 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:33.771891 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:34.039424 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:34.091525 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:34.091705 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:34.273315 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:34.542161 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:34.591480 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:34.594151 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:34.773783 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:35.042835 1420792 pod_ready.go:102] pod "metrics-server-69cf46c98-wwrv8" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:35.043914 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:35.089547 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:35.092623 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:35.271841 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:35.542575 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:35.590305 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:35.590693 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:35.772107 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:36.041532 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:36.091922 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:36.093264 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:36.271246 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:36.550283 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:36.559270 1420792 pod_ready.go:92] pod "metrics-server-69cf46c98-wwrv8" in "kube-system" namespace has status "Ready":"True"
	I0131 02:06:36.559298 1420792 pod_ready.go:81] duration metric: took 10.52566659s waiting for pod "metrics-server-69cf46c98-wwrv8" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:36.559313 1420792 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace to be "Ready" ...
	I0131 02:06:36.598728 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:36.599968 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:36.772093 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:37.041124 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:37.091285 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:37.091642 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:37.271835 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:37.541458 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:37.590618 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:37.590652 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:37.771706 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:38.039908 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:38.089737 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:38.092270 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:38.272541 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:38.540238 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:38.565541 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:38.590297 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:38.591008 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:38.771080 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:39.041282 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:39.091617 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:39.092037 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:39.271684 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:39.540374 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:39.590057 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:39.590892 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:39.771487 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:40.041113 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:40.090177 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:40.092006 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:40.271774 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:40.540040 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:40.567202 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:40.590299 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:40.591088 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:40.771455 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:41.040443 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:41.090292 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:41.090605 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:41.271989 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:41.542018 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:41.590465 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:41.600673 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:41.773416 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:42.041155 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:42.090218 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:42.090650 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:42.272335 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:42.540970 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:42.567354 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:42.592642 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:42.592783 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:42.773770 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:43.040733 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:43.089974 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:43.090496 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:43.271211 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:43.540487 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:43.598539 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:43.600449 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:43.772302 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:44.040972 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:44.089513 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:44.091905 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:44.272374 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:44.540753 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:44.590591 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:44.591933 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:44.771228 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:45.040904 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:45.067351 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:45.089424 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:45.092990 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:45.382678 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:45.539876 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:45.596363 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:45.597464 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:45.771271 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:46.041665 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:46.089152 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:46.094723 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:46.270886 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:46.540030 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:46.591553 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:46.595145 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:46.771875 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:47.040205 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:47.068895 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:47.091748 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:47.093770 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:47.272236 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:47.545919 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:47.593085 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:47.595137 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:47.771980 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:48.040231 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:48.089118 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:48.091414 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:48.271599 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:48.540351 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:48.589280 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:48.592277 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:48.771345 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:49.040896 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:49.094656 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:49.110086 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:49.272327 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:49.541550 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:49.569715 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:49.590171 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:49.594489 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:49.772636 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:50.040410 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:50.091523 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:50.092662 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:50.271498 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:50.544404 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:50.590988 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:50.591237 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:50.771467 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:51.040897 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:51.095374 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:51.096756 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:51.272133 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:51.540390 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:51.590516 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:51.594269 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:51.771759 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:52.041019 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:52.066506 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:52.089854 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:52.091503 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:52.561467 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:52.562504 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:52.593700 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:52.594400 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:52.772570 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:53.040812 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:53.090063 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:53.090131 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:53.272333 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:53.540602 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:53.592403 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:53.598689 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:53.772201 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:54.040768 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:54.069168 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:54.089854 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:54.090595 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:54.273009 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:54.540454 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:54.592517 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:54.593463 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:54.771394 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:55.040722 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:55.089298 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:55.089751 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:55.272564 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:55.540248 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:55.590193 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:55.590825 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:55.772183 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:56.042761 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:56.090435 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:56.092958 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:56.272218 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:56.544115 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:56.571216 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:56.590780 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:56.591290 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:56.771806 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:57.041813 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:57.090037 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:57.090421 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:57.271659 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:57.540723 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:57.591407 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:57.592591 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:57.771587 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:58.040812 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:58.090924 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:58.091775 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:58.271888 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:58.542681 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:58.590411 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:58.590570 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:58.771813 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:59.116250 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:59.118216 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:59.118983 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:06:59.120820 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:59.271671 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:06:59.541479 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:06:59.590398 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:06:59.591258 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:06:59.771794 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:00.043340 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:00.091424 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:00.092877 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:00.271512 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:00.542931 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:00.591585 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:00.592024 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:00.771155 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:01.040970 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:01.089370 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:01.090425 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:01.272508 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:01.539974 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:01.568228 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:07:01.589761 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:01.591064 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:01.771681 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:02.043275 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:02.090901 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:02.092576 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:02.272111 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:02.546666 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:02.596747 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:02.596926 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:02.772361 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:03.040843 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:03.090125 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:03.090124 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:03.271885 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:03.542090 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:03.592170 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:03.598938 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:03.771931 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:04.040903 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:04.066376 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:07:04.089337 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:04.089603 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:04.272136 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:04.541013 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:04.590595 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:04.591730 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:04.772206 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:05.040317 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:05.091077 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:05.092448 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:05.274947 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:05.589138 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:05.591348 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:05.592340 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:05.771862 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:06.039966 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:06.067476 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:07:06.089785 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:06.090394 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:06.271975 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:06.541159 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:06.590475 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:06.592240 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:06.771002 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:07.040375 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:07.097933 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:07.098049 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:07.271253 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:07.540090 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:07.594416 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:07.595626 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:07.771844 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:08.039994 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:08.069344 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:07:08.089731 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:08.092214 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:08.272412 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:08.544944 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:08.595078 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:08.596379 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:09.198500 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:09.201855 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:09.206853 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:09.206999 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:09.272127 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:09.540969 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:09.593140 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:09.593301 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:09.773661 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:10.040209 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:10.090227 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:10.091705 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:10.271965 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:10.541713 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:10.567269 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:07:10.593401 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:10.593698 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:10.772186 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:11.040567 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:11.089907 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:11.090125 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:11.272558 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:11.543141 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:11.589394 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:11.589948 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:11.771500 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:12.039412 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:12.090725 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:12.091290 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:12.271829 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:12.541316 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:12.569083 1420792 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"False"
	I0131 02:07:12.589509 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:12.592477 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:12.771841 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:13.039884 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:13.090573 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:13.092640 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:13.271756 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:13.540688 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:13.565752 1420792 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace has status "Ready":"True"
	I0131 02:07:13.565783 1420792 pod_ready.go:81] duration metric: took 37.006461667s waiting for pod "nvidia-device-plugin-daemonset-kqz46" in "kube-system" namespace to be "Ready" ...
	I0131 02:07:13.565866 1420792 pod_ready.go:38] duration metric: took 1m1.466890804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:07:13.565912 1420792 api_server.go:52] waiting for apiserver process to appear ...
	I0131 02:07:13.565960 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 02:07:13.566058 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 02:07:13.588923 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:13.592133 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:13.616925 1420792 cri.go:89] found id: "4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b"
	I0131 02:07:13.616986 1420792 cri.go:89] found id: ""
	I0131 02:07:13.617078 1420792 logs.go:284] 1 containers: [4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b]
	I0131 02:07:13.617247 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:13.621510 1420792 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 02:07:13.621580 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 02:07:13.668357 1420792 cri.go:89] found id: "e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814"
	I0131 02:07:13.668391 1420792 cri.go:89] found id: ""
	I0131 02:07:13.668401 1420792 logs.go:284] 1 containers: [e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814]
	I0131 02:07:13.668468 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:13.672449 1420792 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 02:07:13.672514 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 02:07:13.709705 1420792 cri.go:89] found id: "4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71"
	I0131 02:07:13.709734 1420792 cri.go:89] found id: ""
	I0131 02:07:13.709746 1420792 logs.go:284] 1 containers: [4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71]
	I0131 02:07:13.709800 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:13.713651 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 02:07:13.713734 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 02:07:13.751299 1420792 cri.go:89] found id: "7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77"
	I0131 02:07:13.751327 1420792 cri.go:89] found id: ""
	I0131 02:07:13.751338 1420792 logs.go:284] 1 containers: [7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77]
	I0131 02:07:13.751398 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:13.756537 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 02:07:13.756625 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 02:07:13.771747 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:13.797679 1420792 cri.go:89] found id: "ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f"
	I0131 02:07:13.797710 1420792 cri.go:89] found id: ""
	I0131 02:07:13.797720 1420792 logs.go:284] 1 containers: [ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f]
	I0131 02:07:13.797787 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:13.801552 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 02:07:13.801626 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 02:07:13.842136 1420792 cri.go:89] found id: "a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c"
	I0131 02:07:13.842161 1420792 cri.go:89] found id: ""
	I0131 02:07:13.842169 1420792 logs.go:284] 1 containers: [a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c]
	I0131 02:07:13.842251 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:13.846783 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 02:07:13.846856 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 02:07:14.040448 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:14.090523 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:14.092142 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:14.138426 1420792 cri.go:89] found id: ""
	I0131 02:07:14.138459 1420792 logs.go:284] 0 containers: []
	W0131 02:07:14.138469 1420792 logs.go:286] No container was found matching "kindnet"
	I0131 02:07:14.138492 1420792 logs.go:123] Gathering logs for kubelet ...
	I0131 02:07:14.138513 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 02:07:14.264467 1420792 logs.go:123] Gathering logs for kube-apiserver [4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b] ...
	I0131 02:07:14.264515 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b"
	I0131 02:07:14.272427 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:14.486679 1420792 logs.go:123] Gathering logs for coredns [4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71] ...
	I0131 02:07:14.486724 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71"
	I0131 02:07:14.540020 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:14.589059 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:14.590398 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:14.666632 1420792 logs.go:123] Gathering logs for kube-proxy [ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f] ...
	I0131 02:07:14.666692 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f"
	I0131 02:07:14.732353 1420792 logs.go:123] Gathering logs for kube-controller-manager [a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c] ...
	I0131 02:07:14.732392 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c"
	I0131 02:07:14.771132 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:14.810884 1420792 logs.go:123] Gathering logs for dmesg ...
	I0131 02:07:14.810931 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 02:07:14.847537 1420792 logs.go:123] Gathering logs for describe nodes ...
	I0131 02:07:14.847569 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 02:07:15.043883 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:15.062153 1420792 logs.go:123] Gathering logs for etcd [e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814] ...
	I0131 02:07:15.062214 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814"
	I0131 02:07:15.090992 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:15.093150 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:15.161480 1420792 logs.go:123] Gathering logs for kube-scheduler [7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77] ...
	I0131 02:07:15.161530 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77"
	I0131 02:07:15.212798 1420792 logs.go:123] Gathering logs for CRI-O ...
	I0131 02:07:15.212854 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 02:07:15.271301 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:15.545221 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:15.591784 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:15.592265 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:15.732402 1420792 logs.go:123] Gathering logs for container status ...
	I0131 02:07:15.732458 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 02:07:15.771121 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:16.040964 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:16.090993 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:16.091453 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:16.271895 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:16.543331 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:16.590240 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:16.590764 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:16.772452 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:17.040891 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:17.089701 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:17.090360 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:17.272325 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:17.542328 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:17.589655 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:17.590590 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:17.772094 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:18.041062 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:18.089243 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:18.089640 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:18.272298 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:18.316795 1420792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:07:18.334247 1420792 api_server.go:72] duration metric: took 1m14.654672142s to wait for apiserver process to appear ...
	I0131 02:07:18.334281 1420792 api_server.go:88] waiting for apiserver healthz status ...
	I0131 02:07:18.334328 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 02:07:18.334392 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 02:07:18.376034 1420792 cri.go:89] found id: "4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b"
	I0131 02:07:18.376067 1420792 cri.go:89] found id: ""
	I0131 02:07:18.376079 1420792 logs.go:284] 1 containers: [4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b]
	I0131 02:07:18.376144 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:18.380140 1420792 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 02:07:18.380212 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 02:07:18.418808 1420792 cri.go:89] found id: "e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814"
	I0131 02:07:18.418839 1420792 cri.go:89] found id: ""
	I0131 02:07:18.418851 1420792 logs.go:284] 1 containers: [e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814]
	I0131 02:07:18.418933 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:18.423119 1420792 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 02:07:18.423195 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 02:07:18.483683 1420792 cri.go:89] found id: "4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71"
	I0131 02:07:18.483712 1420792 cri.go:89] found id: ""
	I0131 02:07:18.483723 1420792 logs.go:284] 1 containers: [4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71]
	I0131 02:07:18.483825 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:18.506557 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 02:07:18.506653 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 02:07:18.541845 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:18.592156 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:18.592673 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:18.716866 1420792 cri.go:89] found id: "7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77"
	I0131 02:07:18.716899 1420792 cri.go:89] found id: ""
	I0131 02:07:18.716910 1420792 logs.go:284] 1 containers: [7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77]
	I0131 02:07:18.716985 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:18.739970 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 02:07:18.740123 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 02:07:18.772027 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:18.879938 1420792 cri.go:89] found id: "ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f"
	I0131 02:07:18.879970 1420792 cri.go:89] found id: ""
	I0131 02:07:18.879983 1420792 logs.go:284] 1 containers: [ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f]
	I0131 02:07:18.880052 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:18.891669 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 02:07:18.891763 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 02:07:18.974659 1420792 cri.go:89] found id: "a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c"
	I0131 02:07:18.974683 1420792 cri.go:89] found id: ""
	I0131 02:07:18.974694 1420792 logs.go:284] 1 containers: [a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c]
	I0131 02:07:18.974763 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:18.985169 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 02:07:18.985253 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 02:07:19.043035 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:19.064482 1420792 cri.go:89] found id: ""
	I0131 02:07:19.064509 1420792 logs.go:284] 0 containers: []
	W0131 02:07:19.064520 1420792 logs.go:286] No container was found matching "kindnet"
	I0131 02:07:19.064532 1420792 logs.go:123] Gathering logs for dmesg ...
	I0131 02:07:19.064554 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 02:07:19.083128 1420792 logs.go:123] Gathering logs for kube-apiserver [4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b] ...
	I0131 02:07:19.083162 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b"
	I0131 02:07:19.089845 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:19.091908 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:19.162884 1420792 logs.go:123] Gathering logs for etcd [e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814] ...
	I0131 02:07:19.162937 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814"
	I0131 02:07:19.291238 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:19.299584 1420792 logs.go:123] Gathering logs for coredns [4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71] ...
	I0131 02:07:19.299676 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71"
	I0131 02:07:19.369892 1420792 logs.go:123] Gathering logs for kube-controller-manager [a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c] ...
	I0131 02:07:19.369946 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c"
	I0131 02:07:19.435431 1420792 logs.go:123] Gathering logs for CRI-O ...
	I0131 02:07:19.435472 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 02:07:19.540644 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:19.590106 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:19.590401 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:19.861016 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:19.930733 1420792 logs.go:123] Gathering logs for container status ...
	I0131 02:07:19.930804 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 02:07:20.020015 1420792 logs.go:123] Gathering logs for kubelet ...
	I0131 02:07:20.020072 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 02:07:20.042994 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:20.094355 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:20.097976 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:20.129340 1420792 logs.go:123] Gathering logs for describe nodes ...
	I0131 02:07:20.129385 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 02:07:20.280343 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:20.346352 1420792 logs.go:123] Gathering logs for kube-scheduler [7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77] ...
	I0131 02:07:20.346395 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77"
	I0131 02:07:20.438361 1420792 logs.go:123] Gathering logs for kube-proxy [ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f] ...
	I0131 02:07:20.438405 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f"
	I0131 02:07:20.540322 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:20.590405 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:20.591624 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:20.772390 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:21.040414 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:21.093833 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:21.094724 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:21.272674 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:21.544265 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:21.589516 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:21.589515 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:21.772589 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:22.040548 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:22.089282 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:22.089877 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:22.271535 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:22.540928 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:22.590049 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:22.590974 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:22.773980 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:22.996085 1420792 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 02:07:23.003113 1420792 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 02:07:23.004269 1420792 api_server.go:141] control plane version: v1.28.4
	I0131 02:07:23.004294 1420792 api_server.go:131] duration metric: took 4.670005212s to wait for apiserver health ...
	I0131 02:07:23.004324 1420792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 02:07:23.004349 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 02:07:23.004399 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 02:07:23.040286 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:23.091488 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:23.094784 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:23.147179 1420792 cri.go:89] found id: "4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b"
	I0131 02:07:23.147215 1420792 cri.go:89] found id: ""
	I0131 02:07:23.147228 1420792 logs.go:284] 1 containers: [4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b]
	I0131 02:07:23.147297 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:23.155306 1420792 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 02:07:23.155385 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 02:07:23.256064 1420792 cri.go:89] found id: "e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814"
	I0131 02:07:23.256098 1420792 cri.go:89] found id: ""
	I0131 02:07:23.256109 1420792 logs.go:284] 1 containers: [e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814]
	I0131 02:07:23.256175 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:23.279727 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:23.283445 1420792 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 02:07:23.283529 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 02:07:23.368427 1420792 cri.go:89] found id: "4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71"
	I0131 02:07:23.368455 1420792 cri.go:89] found id: ""
	I0131 02:07:23.368464 1420792 logs.go:284] 1 containers: [4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71]
	I0131 02:07:23.368533 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:23.380320 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 02:07:23.380391 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 02:07:23.432934 1420792 cri.go:89] found id: "7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77"
	I0131 02:07:23.432957 1420792 cri.go:89] found id: ""
	I0131 02:07:23.432966 1420792 logs.go:284] 1 containers: [7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77]
	I0131 02:07:23.433018 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:23.437748 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 02:07:23.437824 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 02:07:23.516297 1420792 cri.go:89] found id: "ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f"
	I0131 02:07:23.516372 1420792 cri.go:89] found id: ""
	I0131 02:07:23.516382 1420792 logs.go:284] 1 containers: [ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f]
	I0131 02:07:23.516453 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:23.523602 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 02:07:23.523684 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 02:07:23.540571 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:23.584159 1420792 cri.go:89] found id: "a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c"
	I0131 02:07:23.584194 1420792 cri.go:89] found id: ""
	I0131 02:07:23.584206 1420792 logs.go:284] 1 containers: [a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c]
	I0131 02:07:23.584273 1420792 ssh_runner.go:195] Run: which crictl
	I0131 02:07:23.591213 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:23.594095 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:23.594627 1420792 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 02:07:23.594697 1420792 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 02:07:23.660366 1420792 cri.go:89] found id: ""
	I0131 02:07:23.660401 1420792 logs.go:284] 0 containers: []
	W0131 02:07:23.660409 1420792 logs.go:286] No container was found matching "kindnet"
	I0131 02:07:23.660419 1420792 logs.go:123] Gathering logs for dmesg ...
	I0131 02:07:23.660436 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 02:07:23.696542 1420792 logs.go:123] Gathering logs for describe nodes ...
	I0131 02:07:23.696577 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 02:07:23.771465 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:23.888859 1420792 logs.go:123] Gathering logs for coredns [4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71] ...
	I0131 02:07:23.888918 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71"
	I0131 02:07:23.955012 1420792 logs.go:123] Gathering logs for CRI-O ...
	I0131 02:07:23.955063 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 02:07:24.064828 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:24.093619 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:24.094469 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:24.271777 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:24.301444 1420792 logs.go:123] Gathering logs for container status ...
	I0131 02:07:24.301497 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 02:07:24.395808 1420792 logs.go:123] Gathering logs for kube-controller-manager [a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c] ...
	I0131 02:07:24.395849 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c"
	I0131 02:07:24.504203 1420792 logs.go:123] Gathering logs for kubelet ...
	I0131 02:07:24.504242 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 02:07:24.575583 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:24.584796 1420792 logs.go:123] Gathering logs for kube-apiserver [4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b] ...
	I0131 02:07:24.584840 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b"
	I0131 02:07:24.592031 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:24.592986 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:24.694749 1420792 logs.go:123] Gathering logs for etcd [e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814] ...
	I0131 02:07:24.694788 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814"
	I0131 02:07:24.772234 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:24.784934 1420792 logs.go:123] Gathering logs for kube-scheduler [7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77] ...
	I0131 02:07:24.784980 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77"
	I0131 02:07:24.827815 1420792 logs.go:123] Gathering logs for kube-proxy [ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f] ...
	I0131 02:07:24.827855 1420792 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f"
	I0131 02:07:25.040482 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:25.090358 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:25.092163 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0131 02:07:25.271827 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:25.540390 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:25.591246 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:25.592064 1420792 kapi.go:107] duration metric: took 1m13.509773226s to wait for kubernetes.io/minikube-addons=registry ...
	I0131 02:07:25.772598 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:26.041344 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:26.089649 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:26.271750 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:26.540586 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:26.590491 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:26.777715 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:27.041039 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:27.090839 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:27.275678 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:27.427014 1420792 system_pods.go:59] 18 kube-system pods found
	I0131 02:07:27.427070 1420792 system_pods.go:61] "coredns-5dd5756b68-fw9nr" [60d27c57-51aa-4f6a-8b19-ec851733caa4] Running
	I0131 02:07:27.427080 1420792 system_pods.go:61] "csi-hostpath-attacher-0" [f99a2ec9-a8df-4853-b6ca-12d41728a7a3] Running
	I0131 02:07:27.427089 1420792 system_pods.go:61] "csi-hostpath-resizer-0" [58092fe0-2f68-4210-a958-c32989f040ee] Running
	I0131 02:07:27.427099 1420792 system_pods.go:61] "csi-hostpathplugin-bg884" [130affe4-3ea3-43ff-9d25-3e8a1932fc26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0131 02:07:27.427111 1420792 system_pods.go:61] "etcd-addons-165032" [79e398ca-d2c7-440c-9403-bce6f8f1a3a9] Running
	I0131 02:07:27.427122 1420792 system_pods.go:61] "kube-apiserver-addons-165032" [66347b6f-9b8c-49e6-8c52-0ee571728707] Running
	I0131 02:07:27.427126 1420792 system_pods.go:61] "kube-controller-manager-addons-165032" [beceed79-b93c-433e-ba39-52676b527ff8] Running
	I0131 02:07:27.427132 1420792 system_pods.go:61] "kube-ingress-dns-minikube" [ab656846-5129-406d-849d-6e25c96c7b4d] Running
	I0131 02:07:27.427136 1420792 system_pods.go:61] "kube-proxy-88dcq" [f201fca4-a9ad-4785-aba1-79c3071c7ac5] Running
	I0131 02:07:27.427141 1420792 system_pods.go:61] "kube-scheduler-addons-165032" [b0a02ec7-0846-45f5-9d4e-f0f4b925ba82] Running
	I0131 02:07:27.427146 1420792 system_pods.go:61] "metrics-server-69cf46c98-wwrv8" [c01fcfa3-b4c2-4ea7-a9f3-ef80086f017c] Running
	I0131 02:07:27.427153 1420792 system_pods.go:61] "nvidia-device-plugin-daemonset-kqz46" [bb5127c3-731f-49ab-8391-4b9b2e955e8f] Running
	I0131 02:07:27.427161 1420792 system_pods.go:61] "registry-c9zsd" [91cd4aa6-c504-47cc-a6f4-cb3df86d81c1] Running
	I0131 02:07:27.427168 1420792 system_pods.go:61] "registry-proxy-xwffk" [1e6273d6-5a09-4ec3-aa41-8745cb15c2f5] Running
	I0131 02:07:27.427175 1420792 system_pods.go:61] "snapshot-controller-58dbcc7b99-b6zts" [f8a78017-f415-4735-b712-b1364b10828f] Running
	I0131 02:07:27.427185 1420792 system_pods.go:61] "snapshot-controller-58dbcc7b99-s9n8s" [1efb494d-0076-4791-b709-aa62eb586394] Running
	I0131 02:07:27.427194 1420792 system_pods.go:61] "storage-provisioner" [c1d21716-a761-4474-9dd0-894af6207a1f] Running
	I0131 02:07:27.427201 1420792 system_pods.go:61] "tiller-deploy-7b677967b9-9cnkj" [9ce39066-3c40-4a16-bb14-914a5acdcf78] Running
	I0131 02:07:27.427214 1420792 system_pods.go:74] duration metric: took 4.422881485s to wait for pod list to return data ...
	I0131 02:07:27.427227 1420792 default_sa.go:34] waiting for default service account to be created ...
	I0131 02:07:27.431420 1420792 default_sa.go:45] found service account: "default"
	I0131 02:07:27.431444 1420792 default_sa.go:55] duration metric: took 4.208165ms for default service account to be created ...
	I0131 02:07:27.431452 1420792 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 02:07:27.440248 1420792 system_pods.go:86] 18 kube-system pods found
	I0131 02:07:27.440275 1420792 system_pods.go:89] "coredns-5dd5756b68-fw9nr" [60d27c57-51aa-4f6a-8b19-ec851733caa4] Running
	I0131 02:07:27.440281 1420792 system_pods.go:89] "csi-hostpath-attacher-0" [f99a2ec9-a8df-4853-b6ca-12d41728a7a3] Running
	I0131 02:07:27.440286 1420792 system_pods.go:89] "csi-hostpath-resizer-0" [58092fe0-2f68-4210-a958-c32989f040ee] Running
	I0131 02:07:27.440292 1420792 system_pods.go:89] "csi-hostpathplugin-bg884" [130affe4-3ea3-43ff-9d25-3e8a1932fc26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0131 02:07:27.440298 1420792 system_pods.go:89] "etcd-addons-165032" [79e398ca-d2c7-440c-9403-bce6f8f1a3a9] Running
	I0131 02:07:27.440304 1420792 system_pods.go:89] "kube-apiserver-addons-165032" [66347b6f-9b8c-49e6-8c52-0ee571728707] Running
	I0131 02:07:27.440308 1420792 system_pods.go:89] "kube-controller-manager-addons-165032" [beceed79-b93c-433e-ba39-52676b527ff8] Running
	I0131 02:07:27.440312 1420792 system_pods.go:89] "kube-ingress-dns-minikube" [ab656846-5129-406d-849d-6e25c96c7b4d] Running
	I0131 02:07:27.440315 1420792 system_pods.go:89] "kube-proxy-88dcq" [f201fca4-a9ad-4785-aba1-79c3071c7ac5] Running
	I0131 02:07:27.440319 1420792 system_pods.go:89] "kube-scheduler-addons-165032" [b0a02ec7-0846-45f5-9d4e-f0f4b925ba82] Running
	I0131 02:07:27.440324 1420792 system_pods.go:89] "metrics-server-69cf46c98-wwrv8" [c01fcfa3-b4c2-4ea7-a9f3-ef80086f017c] Running
	I0131 02:07:27.440332 1420792 system_pods.go:89] "nvidia-device-plugin-daemonset-kqz46" [bb5127c3-731f-49ab-8391-4b9b2e955e8f] Running
	I0131 02:07:27.440338 1420792 system_pods.go:89] "registry-c9zsd" [91cd4aa6-c504-47cc-a6f4-cb3df86d81c1] Running
	I0131 02:07:27.440344 1420792 system_pods.go:89] "registry-proxy-xwffk" [1e6273d6-5a09-4ec3-aa41-8745cb15c2f5] Running
	I0131 02:07:27.440352 1420792 system_pods.go:89] "snapshot-controller-58dbcc7b99-b6zts" [f8a78017-f415-4735-b712-b1364b10828f] Running
	I0131 02:07:27.440360 1420792 system_pods.go:89] "snapshot-controller-58dbcc7b99-s9n8s" [1efb494d-0076-4791-b709-aa62eb586394] Running
	I0131 02:07:27.440366 1420792 system_pods.go:89] "storage-provisioner" [c1d21716-a761-4474-9dd0-894af6207a1f] Running
	I0131 02:07:27.440373 1420792 system_pods.go:89] "tiller-deploy-7b677967b9-9cnkj" [9ce39066-3c40-4a16-bb14-914a5acdcf78] Running
	I0131 02:07:27.440379 1420792 system_pods.go:126] duration metric: took 8.921672ms to wait for k8s-apps to be running ...
	I0131 02:07:27.440385 1420792 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 02:07:27.440436 1420792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:07:27.480947 1420792 system_svc.go:56] duration metric: took 40.54684ms WaitForService to wait for kubelet.
	I0131 02:07:27.480988 1420792 kubeadm.go:581] duration metric: took 1m23.80142104s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 02:07:27.481019 1420792 node_conditions.go:102] verifying NodePressure condition ...
	I0131 02:07:27.484363 1420792 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:07:27.484396 1420792 node_conditions.go:123] node cpu capacity is 2
	I0131 02:07:27.484409 1420792 node_conditions.go:105] duration metric: took 3.385106ms to run NodePressure ...
	I0131 02:07:27.484422 1420792 start.go:228] waiting for startup goroutines ...
	I0131 02:07:27.540766 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:27.599484 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:27.771515 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:28.040909 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:28.089729 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:28.275454 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:28.540452 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:28.590503 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:28.772405 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:29.040879 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:29.090436 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:29.272491 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:29.540699 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:29.590668 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:29.771823 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:30.044424 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:30.090104 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:30.274368 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:30.564131 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:30.593628 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:30.771811 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:31.041532 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:31.090609 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:31.272720 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:32.023847 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:32.034150 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:32.041782 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:32.046902 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:32.095162 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:32.272377 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:32.542912 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:32.592285 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:32.780588 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:33.039915 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:33.090533 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:33.271521 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:33.540645 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:33.590182 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:33.773519 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:34.041154 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:34.090923 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:34.274127 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:34.541505 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:34.592112 1420792 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0131 02:07:34.772452 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:35.041026 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:35.090060 1420792 kapi.go:107] duration metric: took 1m23.005031052s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0131 02:07:35.271946 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:35.926741 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:35.935339 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:36.041364 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:36.286633 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:36.540252 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:36.772384 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:37.042131 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:37.272437 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:37.544850 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:37.772588 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:38.041741 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:38.272034 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:38.540662 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:38.773671 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:39.040667 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:39.272204 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:39.541213 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:39.772421 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:40.043022 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:40.271720 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:40.540108 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:40.772452 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:41.042068 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:41.273788 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:41.540217 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0131 02:07:41.771947 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:42.041440 1420792 kapi.go:107] duration metric: took 1m29.507037323s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0131 02:07:42.272380 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:42.771826 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:43.273050 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:43.772312 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:44.272431 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:44.771642 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:45.271529 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:45.771827 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:46.272001 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:46.771850 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:47.271708 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:47.786520 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:48.278352 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:48.775867 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:49.271838 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:49.773802 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:50.272267 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:50.772715 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:51.272492 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:51.772596 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:52.271899 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:52.776959 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:53.271811 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:53.772205 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:54.272173 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:54.772052 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:55.272268 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:55.772479 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:56.272366 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:56.772171 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:57.272611 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:57.771666 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:58.272691 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:58.773417 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:59.272739 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:07:59.771423 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:00.272111 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:00.772253 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:01.272032 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:01.772054 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:02.272923 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:02.772538 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:03.271740 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:03.772348 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:04.272340 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:04.771973 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:05.272174 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:05.772382 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:06.275190 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:06.772355 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:07.271956 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:07.771832 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:08.272238 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:08.770857 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:09.272500 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:09.772432 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:10.272772 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:10.771692 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:11.544137 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:11.771542 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:12.271388 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:12.772295 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:13.271793 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:13.772214 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:14.274880 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:14.771723 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:15.271577 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:15.771431 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:16.272611 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:16.771854 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:17.272544 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:17.771277 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:18.272438 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:18.771628 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:19.272112 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:19.772596 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:20.271514 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:20.771152 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:21.272045 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:21.772000 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:22.271918 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:22.772051 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:23.271961 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:23.772592 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:24.271729 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:24.771469 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:25.271622 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:25.771575 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:26.271383 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:26.773036 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:27.271907 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:27.771780 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:28.272574 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:28.771641 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:29.271649 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:29.772275 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:30.271952 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:30.771951 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:31.271791 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:31.772594 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:32.271610 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:32.771379 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:33.272201 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:33.771744 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:34.271287 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:34.772485 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:35.273400 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:35.772499 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:36.272498 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:36.772394 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:37.272724 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:37.772101 1420792 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0131 02:08:38.273118 1420792 kapi.go:107] duration metric: took 2m22.505491891s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0131 02:08:38.274810 1420792 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-165032 cluster.
	I0131 02:08:38.276218 1420792 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0131 02:08:38.277715 1420792 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0131 02:08:38.279341 1420792 out.go:177] * Enabled addons: nvidia-device-plugin, metrics-server, cloud-spanner, storage-provisioner, inspektor-gadget, helm-tiller, yakd, ingress-dns, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0131 02:08:38.280868 1420792 addons.go:505] enable addons completed in 2m35.134744409s: enabled=[nvidia-device-plugin metrics-server cloud-spanner storage-provisioner inspektor-gadget helm-tiller yakd ingress-dns default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0131 02:08:38.280915 1420792 start.go:233] waiting for cluster config update ...
	I0131 02:08:38.280935 1420792 start.go:242] writing updated cluster config ...
	I0131 02:08:38.281254 1420792 ssh_runner.go:195] Run: rm -f paused
	I0131 02:08:38.334445 1420792 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 02:08:38.336494 1420792 out.go:177] * Done! kubectl is now configured to use "addons-165032" cluster and "default" namespace by default
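The gcp-auth message in the log above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch of what that opt-out could look like in a pod manifest (the pod name, container, image tag, and the label value "true" are illustrative assumptions, not taken from this report):

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-no-gcp-auth        # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"   # label key from the log message; value assumed
    spec:
      containers:
      - name: app                      # hypothetical container
        image: gcr.io/google-samples/hello-app:1.0   # illustrative image/tag

As the log also notes, pods that already exist keep their current mounts; they must be recreated, or the addon re-enabled with --refresh, for the change to apply.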
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 02:05:18 UTC, ends at Wed 2024-01-31 02:11:31 UTC. --
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.282147809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706667091282133186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575392,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=12026f7f-b7c4-4bf5-9664-868511f58c03 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.282663919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=12db371e-7ec6-4e74-96a5-c25e1da5e01f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.282718881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=12db371e-7ec6-4e74-96a5-c25e1da5e01f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.283200542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a9967d47743b28f39b506fb3da3dd376c3328aa3a3cb0886b50304b68a53cc5,PodSandboxId:6bc4863942bd97c6098b948f08acf4c7cb0b5888ec3c6d8b94c07fae87a7448f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706667083486345658,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-r98nx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1016c92f-4591-40ab-a2b9-a99293d51a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 11cf6baa,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5da450ccd20007385e99b39afeccf7800aaf7fcf27335e19d470a4b1c6a96a7,PodSandboxId:a97d13a21fad9b4af6951dc2b9efd01ca64f68d830db927f4b391968aeb60d51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706666965098392582,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-gzz44,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8bea9b65-ee95-41ee-aab6-9f15286c153a,},An
notations:map[string]string{io.kubernetes.container.hash: 473cf5ba,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef72f8abdff78de803241daa9c9302411f20658975181a0648b95ac6fc3d80f6,PodSandboxId:7b098b4e08cc83c4ca928a68b34ecc1c6ad916c06a8023381d1833a60addecd9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,State:CONTAINER_RUNNING,CreatedAt:1706666942656932224,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 312711e8-169b-4200-9f42-4d5db594ed06,},Annotations:map[string]string{io.kubernetes.container.hash: e6aff8d9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6007acbac512e6de7d3b07be0a066855fba8397a963c8b5f23d3767f497a34ea,PodSandboxId:e8e92d63ac342b2eefc779203a14bff7183a1acdb04837e2851d027ab264ada6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706666917684549555,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-6jb4k,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d0cbb9e4-56b2-4187-ac68-3acf8aca77cb,},Annotations:map[string]string{io.kubernetes.container.hash: 80104855,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07692b3717186734c700d811ce1fd4bf4ea6e9796b32ba60f9c6c3938f37724,PodSandboxId:1e9f907a7234bf4654aa0427b7dc5549ec7ad8ddd50238b302e57dc1a138ca25,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706666840113913657,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ztd5s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 43c04248-60ad-426f-b149-9d3f2183e8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7be9b7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:388713b1026ab79d863084d5b0b75f6d46366fd68c8c4153dd2cd67f8de1db17,PodSandboxId:1c2987854552b9a00309d3fcb47bc49e08853ae16ab9f1e08cadff1154a25f91,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1706666839964754005,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-gkm2g,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 0f2da0cb-4a97-4983-b0b8-914be6cb0da9,},Annotations:map[string]string{io.kubernetes.container.hash: c2410c59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a93ee70fb392503459dd5c4799cbad05fd6fc400f267509e74b05a278a2375,PodSandboxId:331e083d1ed74a6d501bcd61148b2ce0ef32e190274f1013df355d9d0f418480,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706666814777154381,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-42pd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c3d6fc0-8c94-4020-a499-a1eed1c3517d,},Annotations:map[string]string{io.kubernetes.container.hash: 743db6c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04a12eb31dbf1aaf973825f803e1d05a9d096deface715f85efc50a74ebd8203,PodSandboxId:3c893369bdd97bdba0ccc932ab5e92627fce2f619b220b12a9d626a2f24e9fde,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706666784406736976,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-jbprw,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: be7581cf-1640-4563-b94c-21907584e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 6a726182,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4e20d7a1bbe55a3dc0024fc037e2f4ae88ad7c644d5c90fa4248114a91d0e0,PodSandboxId:9fc0324d6aba63febfffb4116f2fe8fe41d6925a910688ed6d2df113675c15e0,Metadata:&ContainerMetadata{Name:storage-provisioner,A
ttempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706666782220041974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d21716-a761-4474-9dd0-894af6207a1f,},Annotations:map[string]string{io.kubernetes.container.hash: f1579cca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f,PodSandboxId:4b8c98c131e700566467b344d3c2d25f81c7efdbb642073449660c1358e44a2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,}
,Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706666769637950692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88dcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f201fca4-a9ad-4785-aba1-79c3071c7ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 5ae50139,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71,PodSandboxId:e2335a0376cfc283defbe06ace1a660bd4e38393414c7a20266f776c06b615a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89f
d173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706666769846398699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fw9nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d27c57-51aa-4f6a-8b19-ec851733caa4,},Annotations:map[string]string{io.kubernetes.container.hash: 270e0dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e08ce5cf98f4c8489eca9f55a28822eb2d1
8e8f9cb00bc777942176a8c31a77,PodSandboxId:18bf6e8f2fa449d34bdbae398fca0aaf47dcf27bbef5c6ae22abd568625426da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706666743398759946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd4532fd68e0e780ca31359e4bdf29eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2fad
dcdf7ba846814,PodSandboxId:e7513359ba3e326d27bdab00f3002cb4706a64a9adcefdbd679b142f98676f25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706666743133083693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7473a9289e15f4af5ad01370ff41295a,},Annotations:map[string]string{io.kubernetes.container.hash: 6079ba75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c,PodSandboxId:5127990c037c19b370306ad2f1ee
80159e0ff15443567bb3f8f78cbce5f0203d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706666742987419929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5954e168d53a83b7f712457190be3064,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b,PodSandboxId:e302c8c
4a5655b7b57df4ce5db783e0fed31c2acb686beca56b5a51e11a47cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706666742949721565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ad1fe0241e339c4b0888b43dd847d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=12db371e-7ec6-4e74-96a5-c25e1da5e01f name=/runtime.v1.RuntimeService
/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.320735392Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=678049d2-6fa5-4696-bbc2-8d7e6867f9ef name=/runtime.v1.RuntimeService/Version
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.320815584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=678049d2-6fa5-4696-bbc2-8d7e6867f9ef name=/runtime.v1.RuntimeService/Version
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.322571110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=937564e6-cc72-4253-99e1-89acea897bfb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.323804281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706667091323788365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575392,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=937564e6-cc72-4253-99e1-89acea897bfb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.324529729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=518b7797-b28f-4766-86c5-096d2ac6de25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.324589160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=518b7797-b28f-4766-86c5-096d2ac6de25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.325004624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a9967d47743b28f39b506fb3da3dd376c3328aa3a3cb0886b50304b68a53cc5,PodSandboxId:6bc4863942bd97c6098b948f08acf4c7cb0b5888ec3c6d8b94c07fae87a7448f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706667083486345658,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-r98nx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1016c92f-4591-40ab-a2b9-a99293d51a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 11cf6baa,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5da450ccd20007385e99b39afeccf7800aaf7fcf27335e19d470a4b1c6a96a7,PodSandboxId:a97d13a21fad9b4af6951dc2b9efd01ca64f68d830db927f4b391968aeb60d51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706666965098392582,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-gzz44,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8bea9b65-ee95-41ee-aab6-9f15286c153a,},An
notations:map[string]string{io.kubernetes.container.hash: 473cf5ba,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef72f8abdff78de803241daa9c9302411f20658975181a0648b95ac6fc3d80f6,PodSandboxId:7b098b4e08cc83c4ca928a68b34ecc1c6ad916c06a8023381d1833a60addecd9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,State:CONTAINER_RUNNING,CreatedAt:1706666942656932224,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 312711e8-169b-4200-9f42-4d5db594ed06,},Annotations:map[string]string{io.kubernetes.container.hash: e6aff8d9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6007acbac512e6de7d3b07be0a066855fba8397a963c8b5f23d3767f497a34ea,PodSandboxId:e8e92d63ac342b2eefc779203a14bff7183a1acdb04837e2851d027ab264ada6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706666917684549555,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-6jb4k,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d0cbb9e4-56b2-4187-ac68-3acf8aca77cb,},Annotations:map[string]string{io.kubernetes.container.hash: 80104855,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07692b3717186734c700d811ce1fd4bf4ea6e9796b32ba60f9c6c3938f37724,PodSandboxId:1e9f907a7234bf4654aa0427b7dc5549ec7ad8ddd50238b302e57dc1a138ca25,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706666840113913657,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ztd5s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 43c04248-60ad-426f-b149-9d3f2183e8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7be9b7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:388713b1026ab79d863084d5b0b75f6d46366fd68c8c4153dd2cd67f8de1db17,PodSandboxId:1c2987854552b9a00309d3fcb47bc49e08853ae16ab9f1e08cadff1154a25f91,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1706666839964754005,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-gkm2g,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 0f2da0cb-4a97-4983-b0b8-914be6cb0da9,},Annotations:map[string]string{io.kubernetes.container.hash: c2410c59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a93ee70fb392503459dd5c4799cbad05fd6fc400f267509e74b05a278a2375,PodSandboxId:331e083d1ed74a6d501bcd61148b2ce0ef32e190274f1013df355d9d0f418480,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706666814777154381,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-42pd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c3d6fc0-8c94-4020-a499-a1eed1c3517d,},Annotations:map[string]string{io.kubernetes.container.hash: 743db6c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04a12eb31dbf1aaf973825f803e1d05a9d096deface715f85efc50a74ebd8203,PodSandboxId:3c893369bdd97bdba0ccc932ab5e92627fce2f619b220b12a9d626a2f24e9fde,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706666784406736976,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-jbprw,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: be7581cf-1640-4563-b94c-21907584e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 6a726182,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4e20d7a1bbe55a3dc0024fc037e2f4ae88ad7c644d5c90fa4248114a91d0e0,PodSandboxId:9fc0324d6aba63febfffb4116f2fe8fe41d6925a910688ed6d2df113675c15e0,Metadata:&ContainerMetadata{Name:storage-provisioner,A
ttempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706666782220041974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d21716-a761-4474-9dd0-894af6207a1f,},Annotations:map[string]string{io.kubernetes.container.hash: f1579cca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f,PodSandboxId:4b8c98c131e700566467b344d3c2d25f81c7efdbb642073449660c1358e44a2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,}
,Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706666769637950692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88dcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f201fca4-a9ad-4785-aba1-79c3071c7ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 5ae50139,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71,PodSandboxId:e2335a0376cfc283defbe06ace1a660bd4e38393414c7a20266f776c06b615a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89f
d173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706666769846398699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fw9nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d27c57-51aa-4f6a-8b19-ec851733caa4,},Annotations:map[string]string{io.kubernetes.container.hash: 270e0dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e08ce5cf98f4c8489eca9f55a28822eb2d1
8e8f9cb00bc777942176a8c31a77,PodSandboxId:18bf6e8f2fa449d34bdbae398fca0aaf47dcf27bbef5c6ae22abd568625426da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706666743398759946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd4532fd68e0e780ca31359e4bdf29eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2fad
dcdf7ba846814,PodSandboxId:e7513359ba3e326d27bdab00f3002cb4706a64a9adcefdbd679b142f98676f25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706666743133083693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7473a9289e15f4af5ad01370ff41295a,},Annotations:map[string]string{io.kubernetes.container.hash: 6079ba75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c,PodSandboxId:5127990c037c19b370306ad2f1ee
80159e0ff15443567bb3f8f78cbce5f0203d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706666742987419929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5954e168d53a83b7f712457190be3064,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b,PodSandboxId:e302c8c
4a5655b7b57df4ce5db783e0fed31c2acb686beca56b5a51e11a47cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706666742949721565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ad1fe0241e339c4b0888b43dd847d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=518b7797-b28f-4766-86c5-096d2ac6de25 name=/runtime.v1.RuntimeService
/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.359728304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dea7a52d-b1c5-4e74-940c-b4e32b0a5de1 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.359821225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dea7a52d-b1c5-4e74-940c-b4e32b0a5de1 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.360834239Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1cd1cf4e-4d7b-4675-a46c-be9ef4ae82e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.362135397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706667091362119968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575392,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=1cd1cf4e-4d7b-4675-a46c-be9ef4ae82e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.362624179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8731f838-f334-4d5f-a1b7-c4ad8922afa4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.362671991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8731f838-f334-4d5f-a1b7-c4ad8922afa4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.363115753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a9967d47743b28f39b506fb3da3dd376c3328aa3a3cb0886b50304b68a53cc5,PodSandboxId:6bc4863942bd97c6098b948f08acf4c7cb0b5888ec3c6d8b94c07fae87a7448f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706667083486345658,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-r98nx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1016c92f-4591-40ab-a2b9-a99293d51a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 11cf6baa,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5da450ccd20007385e99b39afeccf7800aaf7fcf27335e19d470a4b1c6a96a7,PodSandboxId:a97d13a21fad9b4af6951dc2b9efd01ca64f68d830db927f4b391968aeb60d51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706666965098392582,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-gzz44,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8bea9b65-ee95-41ee-aab6-9f15286c153a,},An
notations:map[string]string{io.kubernetes.container.hash: 473cf5ba,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef72f8abdff78de803241daa9c9302411f20658975181a0648b95ac6fc3d80f6,PodSandboxId:7b098b4e08cc83c4ca928a68b34ecc1c6ad916c06a8023381d1833a60addecd9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,State:CONTAINER_RUNNING,CreatedAt:1706666942656932224,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 312711e8-169b-4200-9f42-4d5db594ed06,},Annotations:map[string]string{io.kubernetes.container.hash: e6aff8d9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6007acbac512e6de7d3b07be0a066855fba8397a963c8b5f23d3767f497a34ea,PodSandboxId:e8e92d63ac342b2eefc779203a14bff7183a1acdb04837e2851d027ab264ada6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706666917684549555,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-6jb4k,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d0cbb9e4-56b2-4187-ac68-3acf8aca77cb,},Annotations:map[string]string{io.kubernetes.container.hash: 80104855,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07692b3717186734c700d811ce1fd4bf4ea6e9796b32ba60f9c6c3938f37724,PodSandboxId:1e9f907a7234bf4654aa0427b7dc5549ec7ad8ddd50238b302e57dc1a138ca25,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706666840113913657,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ztd5s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 43c04248-60ad-426f-b149-9d3f2183e8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7be9b7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:388713b1026ab79d863084d5b0b75f6d46366fd68c8c4153dd2cd67f8de1db17,PodSandboxId:1c2987854552b9a00309d3fcb47bc49e08853ae16ab9f1e08cadff1154a25f91,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1706666839964754005,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-gkm2g,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 0f2da0cb-4a97-4983-b0b8-914be6cb0da9,},Annotations:map[string]string{io.kubernetes.container.hash: c2410c59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a93ee70fb392503459dd5c4799cbad05fd6fc400f267509e74b05a278a2375,PodSandboxId:331e083d1ed74a6d501bcd61148b2ce0ef32e190274f1013df355d9d0f418480,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706666814777154381,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-42pd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c3d6fc0-8c94-4020-a499-a1eed1c3517d,},Annotations:map[string]string{io.kubernetes.container.hash: 743db6c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04a12eb31dbf1aaf973825f803e1d05a9d096deface715f85efc50a74ebd8203,PodSandboxId:3c893369bdd97bdba0ccc932ab5e92627fce2f619b220b12a9d626a2f24e9fde,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706666784406736976,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-jbprw,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: be7581cf-1640-4563-b94c-21907584e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 6a726182,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4e20d7a1bbe55a3dc0024fc037e2f4ae88ad7c644d5c90fa4248114a91d0e0,PodSandboxId:9fc0324d6aba63febfffb4116f2fe8fe41d6925a910688ed6d2df113675c15e0,Metadata:&ContainerMetadata{Name:storage-provisioner,A
ttempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706666782220041974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d21716-a761-4474-9dd0-894af6207a1f,},Annotations:map[string]string{io.kubernetes.container.hash: f1579cca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f,PodSandboxId:4b8c98c131e700566467b344d3c2d25f81c7efdbb642073449660c1358e44a2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,}
,Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706666769637950692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88dcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f201fca4-a9ad-4785-aba1-79c3071c7ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 5ae50139,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71,PodSandboxId:e2335a0376cfc283defbe06ace1a660bd4e38393414c7a20266f776c06b615a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89f
d173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706666769846398699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fw9nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d27c57-51aa-4f6a-8b19-ec851733caa4,},Annotations:map[string]string{io.kubernetes.container.hash: 270e0dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e08ce5cf98f4c8489eca9f55a28822eb2d1
8e8f9cb00bc777942176a8c31a77,PodSandboxId:18bf6e8f2fa449d34bdbae398fca0aaf47dcf27bbef5c6ae22abd568625426da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706666743398759946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd4532fd68e0e780ca31359e4bdf29eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2fad
dcdf7ba846814,PodSandboxId:e7513359ba3e326d27bdab00f3002cb4706a64a9adcefdbd679b142f98676f25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706666743133083693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7473a9289e15f4af5ad01370ff41295a,},Annotations:map[string]string{io.kubernetes.container.hash: 6079ba75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c,PodSandboxId:5127990c037c19b370306ad2f1ee
80159e0ff15443567bb3f8f78cbce5f0203d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706666742987419929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5954e168d53a83b7f712457190be3064,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b,PodSandboxId:e302c8c
4a5655b7b57df4ce5db783e0fed31c2acb686beca56b5a51e11a47cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706666742949721565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ad1fe0241e339c4b0888b43dd847d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8731f838-f334-4d5f-a1b7-c4ad8922afa4 name=/runtime.v1.RuntimeService
/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.399499662Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=990b537c-6bd4-4ff1-8b56-443be5d2bd92 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.399562789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=990b537c-6bd4-4ff1-8b56-443be5d2bd92 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.400704234Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1d57ec7a-2a48-4ce3-abe9-0cdfb7d8cea6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.402162101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706667091402146808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575392,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=1d57ec7a-2a48-4ce3-abe9-0cdfb7d8cea6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.402738474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c538c2fd-9773-4a41-b805-28550fa921b2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.402806090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c538c2fd-9773-4a41-b805-28550fa921b2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:11:31 addons-165032 crio[711]: time="2024-01-31 02:11:31.403467241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4a9967d47743b28f39b506fb3da3dd376c3328aa3a3cb0886b50304b68a53cc5,PodSandboxId:6bc4863942bd97c6098b948f08acf4c7cb0b5888ec3c6d8b94c07fae87a7448f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706667083486345658,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-r98nx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1016c92f-4591-40ab-a2b9-a99293d51a0f,},Annotations:map[string]string{io.kubernetes.container.hash: 11cf6baa,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5da450ccd20007385e99b39afeccf7800aaf7fcf27335e19d470a4b1c6a96a7,PodSandboxId:a97d13a21fad9b4af6951dc2b9efd01ca64f68d830db927f4b391968aeb60d51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706666965098392582,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-gzz44,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 8bea9b65-ee95-41ee-aab6-9f15286c153a,},An
notations:map[string]string{io.kubernetes.container.hash: 473cf5ba,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef72f8abdff78de803241daa9c9302411f20658975181a0648b95ac6fc3d80f6,PodSandboxId:7b098b4e08cc83c4ca928a68b34ecc1c6ad916c06a8023381d1833a60addecd9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,State:CONTAINER_RUNNING,CreatedAt:1706666942656932224,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 312711e8-169b-4200-9f42-4d5db594ed06,},Annotations:map[string]string{io.kubernetes.container.hash: e6aff8d9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6007acbac512e6de7d3b07be0a066855fba8397a963c8b5f23d3767f497a34ea,PodSandboxId:e8e92d63ac342b2eefc779203a14bff7183a1acdb04837e2851d027ab264ada6,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706666917684549555,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-6jb4k,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d0cbb9e4-56b2-4187-ac68-3acf8aca77cb,},Annotations:map[string]string{io.kubernetes.container.hash: 80104855,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a07692b3717186734c700d811ce1fd4bf4ea6e9796b32ba60f9c6c3938f37724,PodSandboxId:1e9f907a7234bf4654aa0427b7dc5549ec7ad8ddd50238b302e57dc1a138ca25,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706666840113913657,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ztd5s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 43c04248-60ad-426f-b149-9d3f2183e8c9,},Annotations:map[string]string{io.kubernetes.container.hash: 7be9b7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:388713b1026ab79d863084d5b0b75f6d46366fd68c8c4153dd2cd67f8de1db17,PodSandboxId:1c2987854552b9a00309d3fcb47bc49e08853ae16ab9f1e08cadff1154a25f91,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sh
a256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1706666839964754005,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-gkm2g,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 0f2da0cb-4a97-4983-b0b8-914be6cb0da9,},Annotations:map[string]string{io.kubernetes.container.hash: c2410c59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17a93ee70fb392503459dd5c4799cbad05fd6fc400f267509e74b05a278a2375,PodSandboxId:331e083d1ed74a6d501bcd61148b2ce0ef32e190274f1013df355d9d0f418480,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706666814777154381,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-42pd8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c3d6fc0-8c94-4020-a499-a1eed1c3517d,},Annotations:map[string]string{io.kubernetes.container.hash: 743db6c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04a12eb31dbf1aaf973825f803e1d05a9d096deface715f85efc50a74ebd8203,PodSandboxId:3c893369bdd97bdba0ccc932ab5e92627fce2f619b220b12a9d626a2f24e9fde,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e
727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706666784406736976,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-jbprw,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: be7581cf-1640-4563-b94c-21907584e24d,},Annotations:map[string]string{io.kubernetes.container.hash: 6a726182,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4e20d7a1bbe55a3dc0024fc037e2f4ae88ad7c644d5c90fa4248114a91d0e0,PodSandboxId:9fc0324d6aba63febfffb4116f2fe8fe41d6925a910688ed6d2df113675c15e0,Metadata:&ContainerMetadata{Name:storage-provisioner,A
ttempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706666782220041974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d21716-a761-4474-9dd0-894af6207a1f,},Annotations:map[string]string{io.kubernetes.container.hash: f1579cca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f,PodSandboxId:4b8c98c131e700566467b344d3c2d25f81c7efdbb642073449660c1358e44a2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,}
,Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706666769637950692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88dcq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f201fca4-a9ad-4785-aba1-79c3071c7ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 5ae50139,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71,PodSandboxId:e2335a0376cfc283defbe06ace1a660bd4e38393414c7a20266f776c06b615a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89f
d173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706666769846398699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fw9nr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60d27c57-51aa-4f6a-8b19-ec851733caa4,},Annotations:map[string]string{io.kubernetes.container.hash: 270e0dd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e08ce5cf98f4c8489eca9f55a28822eb2d1
8e8f9cb00bc777942176a8c31a77,PodSandboxId:18bf6e8f2fa449d34bdbae398fca0aaf47dcf27bbef5c6ae22abd568625426da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706666743398759946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd4532fd68e0e780ca31359e4bdf29eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2fad
dcdf7ba846814,PodSandboxId:e7513359ba3e326d27bdab00f3002cb4706a64a9adcefdbd679b142f98676f25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706666743133083693,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7473a9289e15f4af5ad01370ff41295a,},Annotations:map[string]string{io.kubernetes.container.hash: 6079ba75,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c,PodSandboxId:5127990c037c19b370306ad2f1ee
80159e0ff15443567bb3f8f78cbce5f0203d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706666742987419929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5954e168d53a83b7f712457190be3064,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b,PodSandboxId:e302c8c
4a5655b7b57df4ce5db783e0fed31c2acb686beca56b5a51e11a47cd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706666742949721565,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-165032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79ad1fe0241e339c4b0888b43dd847d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c538c2fd-9773-4a41-b805-28550fa921b2 name=/runtime.v1.RuntimeService
/ListContainers
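	
	The ListContainers/Version/ImageFsInfo entries above are routine CRI polling recorded in the crio journal; a rough way to tail the same stream on the node (assuming the addons-165032 profile is still running, and that the crio unit is reachable via journalctl on this ISO) is:
	
	    out/minikube-linux-amd64 -p addons-165032 ssh "sudo journalctl -u crio -f"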
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4a9967d47743b       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   6bc4863942bd9       hello-world-app-5d77478584-r98nx
	f5da450ccd200       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   a97d13a21fad9       headlamp-7ddfbb94ff-gzz44
	ef72f8abdff78       docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da                              2 minutes ago       Running             nginx                     0                   7b098b4e08cc8       nginx
	6007acbac512e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   e8e92d63ac342       gcp-auth-d4c87556c-6jb4k
	a07692b371718       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              patch                     0                   1e9f907a7234b       ingress-nginx-admission-patch-ztd5s
	388713b1026ab       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   1c2987854552b       local-path-provisioner-78b46b4d5c-gkm2g
	17a93ee70fb39       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   331e083d1ed74       ingress-nginx-admission-create-42pd8
	04a12eb31dbf1       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   3c893369bdd97       yakd-dashboard-9947fc6bf-jbprw
	0a4e20d7a1bbe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   9fc0324d6aba6       storage-provisioner
	4592202d5c39e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   e2335a0376cfc       coredns-5dd5756b68-fw9nr
	ae80e4c805dcf       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             5 minutes ago       Running             kube-proxy                0                   4b8c98c131e70       kube-proxy-88dcq
	7e08ce5cf98f4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             5 minutes ago       Running             kube-scheduler            0                   18bf6e8f2fa44       kube-scheduler-addons-165032
	e4653c7142c62       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   e7513359ba3e3       etcd-addons-165032
	a56a37d6bfab2       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             5 minutes ago       Running             kube-controller-manager   0                   5127990c037c1       kube-controller-manager-addons-165032
	4024c2fb49df3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             5 minutes ago       Running             kube-apiserver            0                   e302c8c4a5655       kube-apiserver-addons-165032
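	
	A listing of this shape can usually be reproduced directly against the CRI socket inside the VM; a minimal sketch, assuming the profile is still up:
	
	    out/minikube-linux-amd64 -p addons-165032 ssh "sudo crictl ps -a"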
	
	
	==> coredns [4592202d5c39e639cf9b0804e0d9442a5b500a06739a4bd066b7acfae0f9ae71] <==
	[INFO] 10.244.0.9:49178 - 38776 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000039772s
	[INFO] 10.244.0.9:46282 - 9942 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058301s
	[INFO] 10.244.0.9:46282 - 31450 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040429s
	[INFO] 10.244.0.9:50543 - 30235 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003208s
	[INFO] 10.244.0.9:50543 - 38942 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036278s
	[INFO] 10.244.0.9:52511 - 25811 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000033774s
	[INFO] 10.244.0.9:52511 - 26321 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084136s
	[INFO] 10.244.0.9:42548 - 9383 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000086092s
	[INFO] 10.244.0.9:42548 - 22688 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154077s
	[INFO] 10.244.0.9:45670 - 58663 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000028427s
	[INFO] 10.244.0.9:45670 - 61733 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073s
	[INFO] 10.244.0.9:40176 - 26649 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028543s
	[INFO] 10.244.0.9:40176 - 42264 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00018119s
	[INFO] 10.244.0.9:49749 - 57279 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00002907s
	[INFO] 10.244.0.9:49749 - 22718 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000031873s
	[INFO] 10.244.0.21:50406 - 27036 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000453694s
	[INFO] 10.244.0.21:48055 - 34568 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000090829s
	[INFO] 10.244.0.21:48209 - 46242 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000163736s
	[INFO] 10.244.0.21:57389 - 62909 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000057254s
	[INFO] 10.244.0.21:39670 - 17695 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088019s
	[INFO] 10.244.0.21:35118 - 13955 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072397s
	[INFO] 10.244.0.21:47093 - 1873 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000380632s
	[INFO] 10.244.0.21:60894 - 23796 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000762086s
	[INFO] 10.244.0.24:35875 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00014368s
	[INFO] 10.244.0.24:51287 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00011993s
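	
	The NXDOMAIN/NOERROR pairs above are the usual search-path expansion for in-cluster lookups; a hypothetical probe that produces the same query pattern (the pod name dns-probe and the busybox:1.36 tag are illustrative, not taken from the test) would be:
	
	    kubectl --context addons-165032 run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local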
	
	
	==> describe nodes <==
	Name:               addons-165032
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-165032
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=addons-165032
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T02_05_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-165032
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 02:05:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-165032
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 02:11:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 02:09:56 +0000   Wed, 31 Jan 2024 02:05:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 02:09:56 +0000   Wed, 31 Jan 2024 02:05:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 02:09:56 +0000   Wed, 31 Jan 2024 02:05:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 02:09:56 +0000   Wed, 31 Jan 2024 02:05:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    addons-165032
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2e5e49bdfd34ec08f753b4eff5839ab
	  System UUID:                c2e5e49b-dfd3-4ec0-8f75-3b4eff5839ab
	  Boot ID:                    fc5a4aa3-4f6f-4863-b49a-418533ba3c38
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-r98nx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-d4c87556c-6jb4k                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  headlamp                    headlamp-7ddfbb94ff-gzz44                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 coredns-5dd5756b68-fw9nr                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m29s
	  kube-system                 etcd-addons-165032                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m43s
	  kube-system                 kube-apiserver-addons-165032               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-controller-manager-addons-165032      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-proxy-88dcq                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-scheduler-addons-165032               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  local-path-storage          local-path-provisioner-78b46b4d5c-gkm2g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-jbprw             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m49s (x8 over 5m50s)  kubelet          Node addons-165032 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s (x8 over 5m50s)  kubelet          Node addons-165032 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s (x7 over 5m50s)  kubelet          Node addons-165032 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m41s                  kubelet          Node addons-165032 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s                  kubelet          Node addons-165032 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s                  kubelet          Node addons-165032 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m41s                  kubelet          Node addons-165032 status is now: NodeReady
	  Normal  RegisteredNode           5m29s                  node-controller  Node addons-165032 event: Registered Node addons-165032 in Controller
	
	
	==> dmesg <==
	[  +0.137106] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.008884] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.769289] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.095276] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.132283] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.096433] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.187500] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[  +9.463060] systemd-fstab-generator[906]: Ignoring "noauto" for root device
	[  +8.756167] systemd-fstab-generator[1237]: Ignoring "noauto" for root device
	[Jan31 02:06] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.243573] kauditd_printk_skb: 49 callbacks suppressed
	[  +8.736826] kauditd_printk_skb: 24 callbacks suppressed
	[ +28.167778] kauditd_printk_skb: 18 callbacks suppressed
	[Jan31 02:07] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.075496] kauditd_printk_skb: 18 callbacks suppressed
	[Jan31 02:08] kauditd_printk_skb: 22 callbacks suppressed
	[ +14.857880] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.364820] kauditd_printk_skb: 12 callbacks suppressed
	[Jan31 02:09] kauditd_printk_skb: 30 callbacks suppressed
	[ +25.007731] kauditd_printk_skb: 43 callbacks suppressed
	[ +28.148444] kauditd_printk_skb: 12 callbacks suppressed
	[Jan31 02:11] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [e4653c7142c62eed13551b63738b0d0546e57ccaacc26de2faddcdf7ba846814] <==
	{"level":"warn","ts":"2024-01-31T02:07:32.025905Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-31T02:07:31.668582Z","time spent":"357.314462ms","remote":"127.0.0.1:41750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":1,"response size":29,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true "}
	{"level":"info","ts":"2024-01-31T02:07:35.908363Z","caller":"traceutil/trace.go:171","msg":"trace[1676374312] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"472.592692ms","start":"2024-01-31T02:07:35.435758Z","end":"2024-01-31T02:07:35.90835Z","steps":["trace[1676374312] 'process raft request'  (duration: 472.515022ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-31T02:07:35.908617Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-31T02:07:35.435739Z","time spent":"472.782185ms","remote":"127.0.0.1:41696","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<>"}
	{"level":"info","ts":"2024-01-31T02:07:35.909017Z","caller":"traceutil/trace.go:171","msg":"trace[421834556] linearizableReadLoop","detail":"{readStateIndex:1186; appliedIndex:1186; }","duration":"375.93987ms","start":"2024-01-31T02:07:35.533068Z","end":"2024-01-31T02:07:35.909008Z","steps":["trace[421834556] 'read index received'  (duration: 375.936448ms)","trace[421834556] 'applied index is now lower than readState.Index'  (duration: 2.748µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-31T02:07:35.909273Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.21035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82624"}
	{"level":"info","ts":"2024-01-31T02:07:35.91067Z","caller":"traceutil/trace.go:171","msg":"trace[25875445] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1143; }","duration":"377.572284ms","start":"2024-01-31T02:07:35.533046Z","end":"2024-01-31T02:07:35.910618Z","steps":["trace[25875445] 'agreement among raft nodes before linearized reading'  (duration: 376.043653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-31T02:07:35.910751Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-31T02:07:35.533032Z","time spent":"377.698137ms","remote":"127.0.0.1:41676","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":82646,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-01-31T02:07:35.914614Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.906728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10951"}
	{"level":"info","ts":"2024-01-31T02:07:35.914733Z","caller":"traceutil/trace.go:171","msg":"trace[1333780096] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1144; }","duration":"148.057458ms","start":"2024-01-31T02:07:35.766667Z","end":"2024-01-31T02:07:35.914724Z","steps":["trace[1333780096] 'agreement among raft nodes before linearized reading'  (duration: 147.804739ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-31T02:07:35.914995Z","caller":"traceutil/trace.go:171","msg":"trace[479003406] transaction","detail":"{read_only:false; response_revision:1144; number_of_response:1; }","duration":"370.468309ms","start":"2024-01-31T02:07:35.544519Z","end":"2024-01-31T02:07:35.914987Z","steps":["trace[479003406] 'process raft request'  (duration: 369.82488ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-31T02:07:35.915087Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-31T02:07:35.544503Z","time spent":"370.539126ms","remote":"127.0.0.1:41650","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-69cff4fd79-zcp24.17af4c767a8aa326\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-69cff4fd79-zcp24.17af4c767a8aa326\" value_size:675 lease:5722541708237552659 >> failure:<>"}
	{"level":"warn","ts":"2024-01-31T02:08:11.536219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.231785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4149"}
	{"level":"info","ts":"2024-01-31T02:08:11.536988Z","caller":"traceutil/trace.go:171","msg":"trace[1178309004] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1243; }","duration":"271.006232ms","start":"2024-01-31T02:08:11.265967Z","end":"2024-01-31T02:08:11.536973Z","steps":["trace[1178309004] 'range keys from in-memory index tree'  (duration: 270.162619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-31T02:08:11.536348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.634062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-01-31T02:08:11.537757Z","caller":"traceutil/trace.go:171","msg":"trace[1343198394] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1243; }","duration":"191.039206ms","start":"2024-01-31T02:08:11.346707Z","end":"2024-01-31T02:08:11.537747Z","steps":["trace[1343198394] 'range keys from in-memory index tree'  (duration: 189.399231ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-31T02:08:50.288235Z","caller":"traceutil/trace.go:171","msg":"trace[1806668944] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1356; }","duration":"104.737747ms","start":"2024-01-31T02:08:50.183485Z","end":"2024-01-31T02:08:50.288223Z","steps":["trace[1806668944] 'process raft request'  (duration: 104.638389ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-31T02:09:08.989807Z","caller":"traceutil/trace.go:171","msg":"trace[91831627] transaction","detail":"{read_only:false; response_revision:1530; number_of_response:1; }","duration":"376.586559ms","start":"2024-01-31T02:09:08.61321Z","end":"2024-01-31T02:09:08.989797Z","steps":["trace[91831627] 'process raft request'  (duration: 376.501004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-31T02:09:08.990028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-31T02:09:08.613193Z","time spent":"376.772286ms","remote":"127.0.0.1:41672","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1514 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-31T02:09:08.990341Z","caller":"traceutil/trace.go:171","msg":"trace[721353057] linearizableReadLoop","detail":"{readStateIndex:1600; appliedIndex:1600; }","duration":"274.52297ms","start":"2024-01-31T02:09:08.715811Z","end":"2024-01-31T02:09:08.990333Z","steps":["trace[721353057] 'read index received'  (duration: 274.520857ms)","trace[721353057] 'applied index is now lower than readState.Index'  (duration: 1.676µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-31T02:09:08.990445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.269466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.232\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-01-31T02:09:08.990486Z","caller":"traceutil/trace.go:171","msg":"trace[680161223] range","detail":"{range_begin:/registry/masterleases/192.168.39.232; range_end:; response_count:1; response_revision:1530; }","duration":"245.315121ms","start":"2024-01-31T02:09:08.745164Z","end":"2024-01-31T02:09:08.990479Z","steps":["trace[680161223] 'agreement among raft nodes before linearized reading'  (duration: 245.24403ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-31T02:09:08.990701Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.903219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5638"}
	{"level":"info","ts":"2024-01-31T02:09:08.992152Z","caller":"traceutil/trace.go:171","msg":"trace[1618167288] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1530; }","duration":"276.351545ms","start":"2024-01-31T02:09:08.715787Z","end":"2024-01-31T02:09:08.992139Z","steps":["trace[1618167288] 'agreement among raft nodes before linearized reading'  (duration: 274.871046ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-31T02:09:08.991397Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.157711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5638"}
	{"level":"info","ts":"2024-01-31T02:09:08.992557Z","caller":"traceutil/trace.go:171","msg":"trace[2080369103] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1530; }","duration":"182.325224ms","start":"2024-01-31T02:09:08.810223Z","end":"2024-01-31T02:09:08.992548Z","steps":["trace[2080369103] 'agreement among raft nodes before linearized reading'  (duration: 181.123559ms)"],"step_count":1}
	
	
	==> gcp-auth [6007acbac512e6de7d3b07be0a066855fba8397a963c8b5f23d3767f497a34ea] <==
	2024/01/31 02:08:37 GCP Auth Webhook started!
	2024/01/31 02:08:38 Ready to marshal response ...
	2024/01/31 02:08:38 Ready to write response ...
	2024/01/31 02:08:38 Ready to marshal response ...
	2024/01/31 02:08:38 Ready to write response ...
	2024/01/31 02:08:49 Ready to marshal response ...
	2024/01/31 02:08:49 Ready to write response ...
	2024/01/31 02:08:53 Ready to marshal response ...
	2024/01/31 02:08:53 Ready to write response ...
	2024/01/31 02:08:56 Ready to marshal response ...
	2024/01/31 02:08:56 Ready to write response ...
	2024/01/31 02:08:57 Ready to marshal response ...
	2024/01/31 02:08:57 Ready to write response ...
	2024/01/31 02:08:58 Ready to marshal response ...
	2024/01/31 02:08:58 Ready to write response ...
	2024/01/31 02:09:17 Ready to marshal response ...
	2024/01/31 02:09:17 Ready to write response ...
	2024/01/31 02:09:17 Ready to marshal response ...
	2024/01/31 02:09:17 Ready to write response ...
	2024/01/31 02:09:17 Ready to marshal response ...
	2024/01/31 02:09:17 Ready to write response ...
	2024/01/31 02:09:36 Ready to marshal response ...
	2024/01/31 02:09:36 Ready to write response ...
	2024/01/31 02:11:20 Ready to marshal response ...
	2024/01/31 02:11:20 Ready to write response ...
	
	
	==> kernel <==
	 02:11:31 up 6 min,  0 users,  load average: 1.27, 2.07, 1.15
	Linux addons-165032 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [4024c2fb49df3082432177ea27efa7ad538d638d9d268f86265840b1fd60845b] <==
	W0131 02:09:08.945169       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0131 02:09:14.350660       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	I0131 02:09:15.812314       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0131 02:09:17.083635       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.14.46"}
	I0131 02:09:37.518388       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0131 02:09:53.629928       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0131 02:09:53.630071       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0131 02:09:53.635263       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0131 02:09:53.635350       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0131 02:09:53.651372       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0131 02:09:53.651599       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0131 02:09:53.664584       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0131 02:09:53.664671       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0131 02:09:53.682653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0131 02:09:53.682728       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0131 02:09:53.688491       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0131 02:09:53.688600       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0131 02:09:53.715529       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0131 02:09:53.715594       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0131 02:09:53.720580       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0131 02:09:53.720673       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0131 02:09:54.665573       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0131 02:09:54.720666       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0131 02:09:54.767748       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0131 02:11:20.528164       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.169.168"}
	
	
	==> kube-controller-manager [a56a37d6bfab2e2fb5beed719b66e82ea260178eae1c1560d5243df3f40fa59c] <==
	W0131 02:10:28.841576       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 02:10:28.841636       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0131 02:10:35.717982       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 02:10:35.718077       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0131 02:10:36.307258       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 02:10:36.307325       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0131 02:11:02.608960       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 02:11:02.609145       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0131 02:11:03.409797       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 02:11:03.409980       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0131 02:11:08.262157       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 02:11:08.262266       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0131 02:11:15.785605       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0131 02:11:15.785655       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0131 02:11:20.282903       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0131 02:11:20.315090       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-r98nx"
	I0131 02:11:20.320880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.804204ms"
	I0131 02:11:20.352556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.545867ms"
	I0131 02:11:20.366290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.58091ms"
	I0131 02:11:20.366501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="102.68µs"
	I0131 02:11:23.400090       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0131 02:11:23.416386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="7.638µs"
	I0131 02:11:23.440788       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0131 02:11:24.456913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.990252ms"
	I0131 02:11:24.457575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.522µs"
	
	
	==> kube-proxy [ae80e4c805dcf31b0af3a0fff63aeba64c33fac8fd67b63eefee4dd67d10798f] <==
	I0131 02:06:17.216963       1 server_others.go:69] "Using iptables proxy"
	I0131 02:06:17.607566       1 node.go:141] Successfully retrieved node IP: 192.168.39.232
	I0131 02:06:18.455186       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 02:06:18.455228       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 02:06:18.518823       1 server_others.go:152] "Using iptables Proxier"
	I0131 02:06:18.519010       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 02:06:18.519256       1 server.go:846] "Version info" version="v1.28.4"
	I0131 02:06:18.519403       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 02:06:18.520113       1 config.go:188] "Starting service config controller"
	I0131 02:06:18.520170       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 02:06:18.520202       1 config.go:97] "Starting endpoint slice config controller"
	I0131 02:06:18.520218       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 02:06:18.521119       1 config.go:315] "Starting node config controller"
	I0131 02:06:18.521910       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 02:06:18.621929       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0131 02:06:18.622033       1 shared_informer.go:318] Caches are synced for service config
	I0131 02:06:18.622182       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7e08ce5cf98f4c8489eca9f55a28822eb2d18e8f9cb00bc777942176a8c31a77] <==
	W0131 02:05:47.054608       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 02:05:47.054616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0131 02:05:47.056291       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0131 02:05:47.056327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0131 02:05:47.056387       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 02:05:47.056420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 02:05:47.868354       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 02:05:47.868400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 02:05:47.913902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 02:05:47.913947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0131 02:05:48.025378       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0131 02:05:48.025422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0131 02:05:48.029029       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0131 02:05:48.029079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0131 02:05:48.086956       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 02:05:48.087080       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0131 02:05:48.213666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0131 02:05:48.213753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0131 02:05:48.216127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 02:05:48.216196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0131 02:05:48.297476       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 02:05:48.297617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0131 02:05:48.327408       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 02:05:48.327497       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0131 02:05:50.233022       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 02:05:18 UTC, ends at Wed 2024-01-31 02:11:31 UTC. --
	Jan 31 02:11:20 addons-165032 kubelet[1244]: I0131 02:11:20.325740    1244 memory_manager.go:346] "RemoveStaleState removing state" podUID="130affe4-3ea3-43ff-9d25-3e8a1932fc26" containerName="node-driver-registrar"
	Jan 31 02:11:20 addons-165032 kubelet[1244]: I0131 02:11:20.325750    1244 memory_manager.go:346] "RemoveStaleState removing state" podUID="130affe4-3ea3-43ff-9d25-3e8a1932fc26" containerName="liveness-probe"
	Jan 31 02:11:20 addons-165032 kubelet[1244]: I0131 02:11:20.411967    1244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kql7q\" (UniqueName: \"kubernetes.io/projected/1016c92f-4591-40ab-a2b9-a99293d51a0f-kube-api-access-kql7q\") pod \"hello-world-app-5d77478584-r98nx\" (UID: \"1016c92f-4591-40ab-a2b9-a99293d51a0f\") " pod="default/hello-world-app-5d77478584-r98nx"
	Jan 31 02:11:20 addons-165032 kubelet[1244]: I0131 02:11:20.412127    1244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1016c92f-4591-40ab-a2b9-a99293d51a0f-gcp-creds\") pod \"hello-world-app-5d77478584-r98nx\" (UID: \"1016c92f-4591-40ab-a2b9-a99293d51a0f\") " pod="default/hello-world-app-5d77478584-r98nx"
	Jan 31 02:11:21 addons-165032 kubelet[1244]: I0131 02:11:21.620454    1244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prtmc\" (UniqueName: \"kubernetes.io/projected/ab656846-5129-406d-849d-6e25c96c7b4d-kube-api-access-prtmc\") pod \"ab656846-5129-406d-849d-6e25c96c7b4d\" (UID: \"ab656846-5129-406d-849d-6e25c96c7b4d\") "
	Jan 31 02:11:21 addons-165032 kubelet[1244]: I0131 02:11:21.630424    1244 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab656846-5129-406d-849d-6e25c96c7b4d-kube-api-access-prtmc" (OuterVolumeSpecName: "kube-api-access-prtmc") pod "ab656846-5129-406d-849d-6e25c96c7b4d" (UID: "ab656846-5129-406d-849d-6e25c96c7b4d"). InnerVolumeSpecName "kube-api-access-prtmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 31 02:11:21 addons-165032 kubelet[1244]: I0131 02:11:21.721286    1244 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-prtmc\" (UniqueName: \"kubernetes.io/projected/ab656846-5129-406d-849d-6e25c96c7b4d-kube-api-access-prtmc\") on node \"addons-165032\" DevicePath \"\""
	Jan 31 02:11:22 addons-165032 kubelet[1244]: I0131 02:11:22.409295    1244 scope.go:117] "RemoveContainer" containerID="07365d26edc71d28736575984c37ccd703669f8b6ff1084fce5f3963aea60313"
	Jan 31 02:11:22 addons-165032 kubelet[1244]: I0131 02:11:22.509459    1244 scope.go:117] "RemoveContainer" containerID="07365d26edc71d28736575984c37ccd703669f8b6ff1084fce5f3963aea60313"
	Jan 31 02:11:22 addons-165032 kubelet[1244]: E0131 02:11:22.510446    1244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07365d26edc71d28736575984c37ccd703669f8b6ff1084fce5f3963aea60313\": container with ID starting with 07365d26edc71d28736575984c37ccd703669f8b6ff1084fce5f3963aea60313 not found: ID does not exist" containerID="07365d26edc71d28736575984c37ccd703669f8b6ff1084fce5f3963aea60313"
	Jan 31 02:11:22 addons-165032 kubelet[1244]: I0131 02:11:22.510536    1244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07365d26edc71d28736575984c37ccd703669f8b6ff1084fce5f3963aea60313"} err="failed to get container status \"07365d26edc71d28736575984c37ccd703669f8b6ff1084fce5f3963aea60313\": rpc error: code = NotFound desc = could not find container \"07365d26edc71d28736575984c37ccd703669f8b6ff1084fce5f3963aea60313\": container with ID starting with 07365d26edc71d28736575984c37ccd703669f8b6ff1084fce5f3963aea60313 not found: ID does not exist"
	Jan 31 02:11:22 addons-165032 kubelet[1244]: I0131 02:11:22.699500    1244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ab656846-5129-406d-849d-6e25c96c7b4d" path="/var/lib/kubelet/pods/ab656846-5129-406d-849d-6e25c96c7b4d/volumes"
	Jan 31 02:11:24 addons-165032 kubelet[1244]: I0131 02:11:24.699184    1244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0c3d6fc0-8c94-4020-a499-a1eed1c3517d" path="/var/lib/kubelet/pods/0c3d6fc0-8c94-4020-a499-a1eed1c3517d/volumes"
	Jan 31 02:11:24 addons-165032 kubelet[1244]: I0131 02:11:24.699659    1244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="43c04248-60ad-426f-b149-9d3f2183e8c9" path="/var/lib/kubelet/pods/43c04248-60ad-426f-b149-9d3f2183e8c9/volumes"
	Jan 31 02:11:26 addons-165032 kubelet[1244]: I0131 02:11:26.856512    1244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nls2c\" (UniqueName: \"kubernetes.io/projected/64515697-6c3b-4152-a7d4-813caa32e24d-kube-api-access-nls2c\") pod \"64515697-6c3b-4152-a7d4-813caa32e24d\" (UID: \"64515697-6c3b-4152-a7d4-813caa32e24d\") "
	Jan 31 02:11:26 addons-165032 kubelet[1244]: I0131 02:11:26.856588    1244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64515697-6c3b-4152-a7d4-813caa32e24d-webhook-cert\") pod \"64515697-6c3b-4152-a7d4-813caa32e24d\" (UID: \"64515697-6c3b-4152-a7d4-813caa32e24d\") "
	Jan 31 02:11:26 addons-165032 kubelet[1244]: I0131 02:11:26.860909    1244 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64515697-6c3b-4152-a7d4-813caa32e24d-kube-api-access-nls2c" (OuterVolumeSpecName: "kube-api-access-nls2c") pod "64515697-6c3b-4152-a7d4-813caa32e24d" (UID: "64515697-6c3b-4152-a7d4-813caa32e24d"). InnerVolumeSpecName "kube-api-access-nls2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 31 02:11:26 addons-165032 kubelet[1244]: I0131 02:11:26.861261    1244 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64515697-6c3b-4152-a7d4-813caa32e24d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "64515697-6c3b-4152-a7d4-813caa32e24d" (UID: "64515697-6c3b-4152-a7d4-813caa32e24d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 31 02:11:26 addons-165032 kubelet[1244]: I0131 02:11:26.957725    1244 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nls2c\" (UniqueName: \"kubernetes.io/projected/64515697-6c3b-4152-a7d4-813caa32e24d-kube-api-access-nls2c\") on node \"addons-165032\" DevicePath \"\""
	Jan 31 02:11:26 addons-165032 kubelet[1244]: I0131 02:11:26.957789    1244 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64515697-6c3b-4152-a7d4-813caa32e24d-webhook-cert\") on node \"addons-165032\" DevicePath \"\""
	Jan 31 02:11:27 addons-165032 kubelet[1244]: I0131 02:11:27.447795    1244 scope.go:117] "RemoveContainer" containerID="6c94e74ea0be1ff9a9bab91f0fd33d9051315e13057a8eea926b612cb083328a"
	Jan 31 02:11:27 addons-165032 kubelet[1244]: I0131 02:11:27.476716    1244 scope.go:117] "RemoveContainer" containerID="6c94e74ea0be1ff9a9bab91f0fd33d9051315e13057a8eea926b612cb083328a"
	Jan 31 02:11:27 addons-165032 kubelet[1244]: E0131 02:11:27.477298    1244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c94e74ea0be1ff9a9bab91f0fd33d9051315e13057a8eea926b612cb083328a\": container with ID starting with 6c94e74ea0be1ff9a9bab91f0fd33d9051315e13057a8eea926b612cb083328a not found: ID does not exist" containerID="6c94e74ea0be1ff9a9bab91f0fd33d9051315e13057a8eea926b612cb083328a"
	Jan 31 02:11:27 addons-165032 kubelet[1244]: I0131 02:11:27.477408    1244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c94e74ea0be1ff9a9bab91f0fd33d9051315e13057a8eea926b612cb083328a"} err="failed to get container status \"6c94e74ea0be1ff9a9bab91f0fd33d9051315e13057a8eea926b612cb083328a\": rpc error: code = NotFound desc = could not find container \"6c94e74ea0be1ff9a9bab91f0fd33d9051315e13057a8eea926b612cb083328a\": container with ID starting with 6c94e74ea0be1ff9a9bab91f0fd33d9051315e13057a8eea926b612cb083328a not found: ID does not exist"
	Jan 31 02:11:28 addons-165032 kubelet[1244]: I0131 02:11:28.698734    1244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="64515697-6c3b-4152-a7d4-813caa32e24d" path="/var/lib/kubelet/pods/64515697-6c3b-4152-a7d4-813caa32e24d/volumes"
	
	
	==> storage-provisioner [0a4e20d7a1bbe55a3dc0024fc037e2f4ae88ad7c644d5c90fa4248114a91d0e0] <==
	I0131 02:06:24.333912       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 02:06:24.356890       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 02:06:24.356948       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 02:06:24.593134       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 02:06:24.598252       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-165032_57a41a60-5483-4316-9120-0d94c28da6d3!
	I0131 02:06:24.616712       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a46ee1ed-a341-4abc-9e1e-47a7364666ca", APIVersion:"v1", ResourceVersion:"869", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-165032_57a41a60-5483-4316-9120-0d94c28da6d3 became leader
	I0131 02:06:24.702683       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-165032_57a41a60-5483-4316-9120-0d94c28da6d3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-165032 -n addons-165032
helpers_test.go:261: (dbg) Run:  kubectl --context addons-165032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.34s)
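For local triage of this ingress failure, the sketch below polls the cluster with the Host header the ingress rule expects until it answers or a deadline passes. It is written in Go to match the test suite but is not part of it; the node IP (192.168.39.232, taken from the kube-proxy log above), the nginx.example.com host, and the timings are illustrative assumptions.

// ingress_probe.go - hand-rolled probe for triaging the failure above; the node IP,
// Host header, and timings are illustrative assumptions, not values read from the
// test harness.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, "http://192.168.39.232/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // routes the request through the nginx ingress rule

		resp, err := client.Do(req)
		if err == nil {
			fmt.Println("ingress answered with HTTP", resp.StatusCode)
			resp.Body.Close()
			return
		}
		fmt.Println("not reachable yet:", err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for the ingress endpoint")
}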

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.09s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-165032
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-165032: exit status 82 (2m0.298649156s)

                                                
                                                
-- stdout --
	* Stopping node "addons-165032"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-165032" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-165032
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-165032: exit status 11 (21.506476451s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-165032" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-165032
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-165032: exit status 11 (6.144877334s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-165032" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-165032
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-165032: exit status 11 (6.143641236s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-165032" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.09s)
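The stop above timed out with the VM still reported as "Running", and every follow-up addons command then failed trying to reach 192.168.39.232:22. A rough way to reproduce the GUEST_STOP_TIMEOUT outside the harness is to wrap the stop in a retry loop and poll minikube status between attempts; the sketch below is an illustrative reproduction aid, not the harness's own recovery logic, and the retry count and sleep are assumptions.

// stop_retry.go - illustrative retry wrapper around "minikube stop" for reproducing
// the GUEST_STOP_TIMEOUT above; the profile name matches the failing run, but the
// retry count and sleep are assumptions, not harness behaviour.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "addons-165032"

	for attempt := 1; attempt <= 3; attempt++ {
		// The call that timed out in the log above.
		out, err := exec.Command("minikube", "stop", "-p", profile).CombinedOutput()
		fmt.Printf("stop attempt %d: err=%v\n%s\n", attempt, err, out)

		// Loose check on "minikube status"; a real tool should parse the JSON output.
		status, _ := exec.Command("minikube", "status", "-p", profile, "--output", "json").CombinedOutput()
		if strings.Contains(string(status), "Stopped") {
			fmt.Println("VM reached the Stopped state")
			return
		}
		time.Sleep(30 * time.Second)
	}
	fmt.Println("VM still running; collect diagnostics with: minikube logs -p", profile)
}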

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-618885 /tmp/TestFunctionalserialCacheCmdcacheadd_local13523724/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 cache add minikube-local-cache-test:functional-618885
functional_test.go:1085: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 cache add minikube-local-cache-test:functional-618885: exit status 10 (1.13320702s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: Failed to cache and load images: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/minikube-local-cache-test_functional-618885": write: unable to calculate manifest: blob sha256:0d459ee88c2113f2679db4c8d19c2d1e96108ae911f3639b946bb6775116fc07 not found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_8cee69e8e9d44269643705dd5a80fa16a37a5186_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1087: failed to 'cache add' local image "minikube-local-cache-test:functional-618885". args "out/minikube-linux-amd64 -p functional-618885 cache add minikube-local-cache-test:functional-618885" err exit status 10
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 cache delete minikube-local-cache-test:functional-618885
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 cache delete minikube-local-cache-test:functional-618885: exit status 30 (70.110272ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: Failed to delete images: remove /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/minikube-local-cache-test_functional-618885: no such file or directory
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_c58101ff502dcd50a28b66c2886a5f157b1a787f_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1092: failed to 'cache delete' local image "minikube-local-cache-test:functional-618885". args "out/minikube-linux-amd64 -p functional-618885 cache delete minikube-local-cache-test:functional-618885" err exit status 30
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-618885
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)
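For reference, the failing sequence can be re-run outside the harness. A minimal Go sketch (the tests themselves are Go), assuming the functional-618885 profile is still running and the minikube-local-cache-test:functional-618885 image is still present in the local Docker daemon; the binary path is the one used throughout this report:

    // repro_cache_add.go: hedged sketch that re-runs the exact commands from the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
    }

    func main() {
        mk := "out/minikube-linux-amd64"
        img := "minikube-local-cache-test:functional-618885"
        run(mk, "-p", "functional-618885", "cache", "add", img)    // MK_CACHE_LOAD in the log above
        run(mk, "-p", "functional-618885", "cache", "delete", img) // HOST_DEL_CACHE in the log above
    }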

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image load --daemon gcr.io/google-containers/addon-resizer:functional-618885 --alsologtostderr
functional_test.go:354: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 image load --daemon gcr.io/google-containers/addon-resizer:functional-618885 --alsologtostderr: exit status 80 (1.121925749s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 02:18:06.465958 1428504 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:18:06.466238 1428504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:06.466253 1428504 out.go:309] Setting ErrFile to fd 2...
	I0131 02:18:06.466261 1428504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:06.466476 1428504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:18:06.467124 1428504 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:18:06.467222 1428504 cache.go:107] acquiring lock: {Name:mke868afccda0f834fd95bd10bbb771a42905080 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:18:06.467452 1428504 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-618885
	I0131 02:18:06.469413 1428504 image.go:173] found gcr.io/google-containers/addon-resizer:functional-618885 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-618885 original:gcr.io/google-containers/addon-resizer:functional-618885} opener:0xc0008b4000 tarballImage:<nil> computed:false id:0xc00059c040 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0131 02:18:06.469450 1428504 cache.go:162] opening:  /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885
	I0131 02:18:07.511084 1428504 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-618885" -> "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885" took 1.043876028s
	I0131 02:18:07.513508 1428504 out.go:177] 
	W0131 02:18:07.514826 1428504 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0131 02:18:07.514844 1428504 out.go:239] * 
	* 
	W0131 02:18:07.519570 1428504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 02:18:07.521157 1428504 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:356: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)
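Every GUEST_IMAGE_LOAD failure in this report aborts at the same point: writing the daemon image into the on-disk cache fails with "unable to calculate manifest: blob sha256:... not found". The cache write appears to go through go-containerregistry (the ref struct in the log above is that library's name type), so the same path can be exercised directly with a short sketch; the library usage below is an assumption about the mechanism, not a copy of minikube's code, and the output path is illustrative:

    // save_path_sketch.go: hedged sketch of a daemon-to-tarball write like the one the cache performs.
    package main

    import (
        "log"

        "github.com/google/go-containerregistry/pkg/name"
        "github.com/google/go-containerregistry/pkg/v1/daemon"
        "github.com/google/go-containerregistry/pkg/v1/tarball"
    )

    func main() {
        ref, err := name.ParseReference("gcr.io/google-containers/addon-resizer:functional-618885")
        if err != nil {
            log.Fatal(err)
        }
        img, err := daemon.Image(ref) // read the image from the local Docker daemon
        if err != nil {
            log.Fatal(err)
        }
        // Writing the tarball reads every layer blob back from the daemon; a layer the
        // daemon cannot serve shows up as a "blob sha256:... not found" style error here.
        if err := tarball.WriteToFile("/tmp/addon-resizer_functional-618885.tar", ref, img); err != nil {
            log.Fatal(err)
        }
    }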

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image load --daemon gcr.io/google-containers/addon-resizer:functional-618885 --alsologtostderr
2024/01/31 02:18:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 image load --daemon gcr.io/google-containers/addon-resizer:functional-618885 --alsologtostderr: exit status 80 (739.765972ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 02:18:07.591158 1428527 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:18:07.591303 1428527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:07.591317 1428527 out.go:309] Setting ErrFile to fd 2...
	I0131 02:18:07.591324 1428527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:07.591557 1428527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:18:07.592233 1428527 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:18:07.592315 1428527 cache.go:107] acquiring lock: {Name:mke868afccda0f834fd95bd10bbb771a42905080 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:18:07.592432 1428527 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-618885
	I0131 02:18:07.594333 1428527 image.go:173] found gcr.io/google-containers/addon-resizer:functional-618885 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-618885 original:gcr.io/google-containers/addon-resizer:functional-618885} opener:0xc0007261c0 tarballImage:<nil> computed:false id:0xc000c24080 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0131 02:18:07.594360 1428527 cache.go:162] opening:  /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885
	I0131 02:18:08.249521 1428527 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-618885" -> "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885" took 657.214391ms
	I0131 02:18:08.251625 1428527 out.go:177] 
	W0131 02:18:08.253357 1428527 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0131 02:18:08.253379 1428527 out.go:239] * 
	* 
	W0131 02:18:08.258647 1428527 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 02:18:08.260307 1428527 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:366: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.142468071s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-618885
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image load --daemon gcr.io/google-containers/addon-resizer:functional-618885 --alsologtostderr
functional_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 image load --daemon gcr.io/google-containers/addon-resizer:functional-618885 --alsologtostderr: exit status 80 (699.769191ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 02:18:10.492701 1428603 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:18:10.492923 1428603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:10.492933 1428603 out.go:309] Setting ErrFile to fd 2...
	I0131 02:18:10.492938 1428603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:10.493123 1428603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:18:10.493700 1428603 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:18:10.493773 1428603 cache.go:107] acquiring lock: {Name:mke868afccda0f834fd95bd10bbb771a42905080 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:18:10.493849 1428603 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-618885
	I0131 02:18:10.495538 1428603 image.go:173] found gcr.io/google-containers/addon-resizer:functional-618885 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-618885 original:gcr.io/google-containers/addon-resizer:functional-618885} opener:0xc0007cafc0 tarballImage:<nil> computed:false id:0xc000784040 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0131 02:18:10.495569 1428603 cache.go:162] opening:  /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885
	I0131 02:18:11.113375 1428603 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-618885" -> "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885" took 619.609646ms
	I0131 02:18:11.116392 1428603 out.go:177] 
	W0131 02:18:11.117949 1428603 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	W0131 02:18:11.117967 1428603 out.go:239] * 
	* 
	W0131 02:18:11.122627 1428603 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 02:18:11.124096 1428603 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:246: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.86s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image save gcr.io/google-containers/addon-resizer:functional-618885 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0131 02:18:12.117142 1428686 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:18:12.117498 1428686 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:12.117512 1428686 out.go:309] Setting ErrFile to fd 2...
	I0131 02:18:12.117517 1428686 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:12.117709 1428686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:18:12.118432 1428686 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:18:12.118561 1428686 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:18:12.118990 1428686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:18:12.119041 1428686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:18:12.135367 1428686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0131 02:18:12.135929 1428686 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:18:12.136605 1428686 main.go:141] libmachine: Using API Version  1
	I0131 02:18:12.136633 1428686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:18:12.137016 1428686 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:18:12.137266 1428686 main.go:141] libmachine: (functional-618885) Calling .GetState
	I0131 02:18:12.139497 1428686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:18:12.139552 1428686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:18:12.156633 1428686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33297
	I0131 02:18:12.157220 1428686 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:18:12.157835 1428686 main.go:141] libmachine: Using API Version  1
	I0131 02:18:12.157870 1428686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:18:12.158274 1428686 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:18:12.158552 1428686 main.go:141] libmachine: (functional-618885) Calling .DriverName
	I0131 02:18:12.158847 1428686 ssh_runner.go:195] Run: systemctl --version
	I0131 02:18:12.158879 1428686 main.go:141] libmachine: (functional-618885) Calling .GetSSHHostname
	I0131 02:18:12.162211 1428686 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
	I0131 02:18:12.162694 1428686 main.go:141] libmachine: (functional-618885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:59:f4", ip: ""} in network mk-functional-618885: {Iface:virbr1 ExpiryTime:2024-01-31 03:15:34 +0000 UTC Type:0 Mac:52:54:00:2c:59:f4 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-618885 Clientid:01:52:54:00:2c:59:f4}
	I0131 02:18:12.162729 1428686 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined IP address 192.168.39.221 and MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
	I0131 02:18:12.162922 1428686 main.go:141] libmachine: (functional-618885) Calling .GetSSHPort
	I0131 02:18:12.163100 1428686 main.go:141] libmachine: (functional-618885) Calling .GetSSHKeyPath
	I0131 02:18:12.163307 1428686 main.go:141] libmachine: (functional-618885) Calling .GetSSHUsername
	I0131 02:18:12.163593 1428686 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/functional-618885/id_rsa Username:docker}
	I0131 02:18:12.294837 1428686 cache_images.go:286] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
	W0131 02:18:12.294918 1428686 cache_images.go:254] Failed to load cached images for profile functional-618885. make sure the profile is running. loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar: no such file or directory
	I0131 02:18:12.294975 1428686 cache_images.go:262] succeeded pushing to: 
	I0131 02:18:12.294983 1428686 cache_images.go:263] failed pushing to: functional-618885
	I0131 02:18:12.295015 1428686 main.go:141] libmachine: Making call to close driver server
	I0131 02:18:12.295031 1428686 main.go:141] libmachine: (functional-618885) Calling .Close
	I0131 02:18:12.295341 1428686 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:18:12.295359 1428686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:18:12.295369 1428686 main.go:141] libmachine: Making call to close driver server
	I0131 02:18:12.295390 1428686 main.go:141] libmachine: (functional-618885) Calling .Close
	I0131 02:18:12.295632 1428686 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:18:12.295698 1428686 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:18:12.295653 1428686 main.go:141] libmachine: (functional-618885) DBG | Closing plugin on server side

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)
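ImageSaveToFile and ImageLoadFromFile fail as a pair: the save never produces addon-resizer-save.tar, so the subsequent load has nothing to stat. The round trip can be checked in isolation with a sketch along these lines (binary path, profile, and tarball path are copied from the log; treat them as placeholders for your environment):

    // save_load_roundtrip.go: hedged sketch, image save to a tarball, verify it exists, load it back.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        mk := "out/minikube-linux-amd64"
        img := "gcr.io/google-containers/addon-resizer:functional-618885"
        tar := "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar"

        out, err := exec.Command(mk, "-p", "functional-618885", "image", "save", img, tar).CombinedOutput()
        fmt.Printf("image save: err=%v\n%s", err, out)

        if _, err := os.Stat(tar); err != nil {
            // This is the condition functional_test.go:385 asserts on.
            fmt.Println("tarball missing after image save:", err)
            return
        }

        out, err = exec.Command(mk, "-p", "functional-618885", "image", "load", tar).CombinedOutput()
        fmt.Printf("image load: err=%v\n%s", err, out)
    }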

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-618885
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image save --daemon gcr.io/google-containers/addon-resizer:functional-618885 --alsologtostderr
functional_test.go:423: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 image save --daemon gcr.io/google-containers/addon-resizer:functional-618885 --alsologtostderr: exit status 80 (405.471198ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 02:18:12.393919 1428730 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:18:12.394088 1428730 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:12.394100 1428730 out.go:309] Setting ErrFile to fd 2...
	I0131 02:18:12.394105 1428730 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:12.394326 1428730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:18:12.395006 1428730 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:18:12.395050 1428730 cache_images.go:396] Save images: ["gcr.io/google-containers/addon-resizer:functional-618885"]
	I0131 02:18:12.395169 1428730 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:18:12.395652 1428730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:18:12.395704 1428730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:18:12.412143 1428730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I0131 02:18:12.412655 1428730 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:18:12.413257 1428730 main.go:141] libmachine: Using API Version  1
	I0131 02:18:12.413282 1428730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:18:12.413652 1428730 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:18:12.413893 1428730 main.go:141] libmachine: (functional-618885) Calling .GetState
	I0131 02:18:12.416120 1428730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:18:12.416175 1428730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:18:12.432842 1428730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0131 02:18:12.433316 1428730 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:18:12.433848 1428730 main.go:141] libmachine: Using API Version  1
	I0131 02:18:12.433879 1428730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:18:12.434333 1428730 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:18:12.434563 1428730 main.go:141] libmachine: (functional-618885) Calling .DriverName
	I0131 02:18:12.434753 1428730 cache_images.go:341] SaveImages start: [gcr.io/google-containers/addon-resizer:functional-618885]
	I0131 02:18:12.434905 1428730 ssh_runner.go:195] Run: systemctl --version
	I0131 02:18:12.434943 1428730 main.go:141] libmachine: (functional-618885) Calling .GetSSHHostname
	I0131 02:18:12.438352 1428730 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
	I0131 02:18:12.438814 1428730 main.go:141] libmachine: (functional-618885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:59:f4", ip: ""} in network mk-functional-618885: {Iface:virbr1 ExpiryTime:2024-01-31 03:15:34 +0000 UTC Type:0 Mac:52:54:00:2c:59:f4 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-618885 Clientid:01:52:54:00:2c:59:f4}
	I0131 02:18:12.438846 1428730 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined IP address 192.168.39.221 and MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
	I0131 02:18:12.438994 1428730 main.go:141] libmachine: (functional-618885) Calling .GetSSHPort
	I0131 02:18:12.439185 1428730 main.go:141] libmachine: (functional-618885) Calling .GetSSHKeyPath
	I0131 02:18:12.439341 1428730 main.go:141] libmachine: (functional-618885) Calling .GetSSHUsername
	I0131 02:18:12.439482 1428730 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/functional-618885/id_rsa Username:docker}
	I0131 02:18:12.585529 1428730 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/google-containers/addon-resizer:functional-618885
	I0131 02:18:12.712032 1428730 cache_images.go:345] SaveImages completed in 277.253669ms
	W0131 02:18:12.712065 1428730 cache_images.go:442] Failed to load cached images for profile functional-618885. make sure the profile is running. saving cached images: image gcr.io/google-containers/addon-resizer:functional-618885 not found
	I0131 02:18:12.712080 1428730 cache_images.go:450] succeeded pulling from : 
	I0131 02:18:12.712085 1428730 cache_images.go:451] failed pulling from : functional-618885
	I0131 02:18:12.712115 1428730 main.go:141] libmachine: Making call to close driver server
	I0131 02:18:12.712126 1428730 main.go:141] libmachine: (functional-618885) Calling .Close
	I0131 02:18:12.712497 1428730 main.go:141] libmachine: (functional-618885) DBG | Closing plugin on server side
	I0131 02:18:12.712564 1428730 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:18:12.712573 1428730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:18:12.712590 1428730 main.go:141] libmachine: Making call to close driver server
	I0131 02:18:12.712600 1428730 main.go:141] libmachine: (functional-618885) Calling .Close
	I0131 02:18:12.712885 1428730 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:18:12.712909 1428730 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:18:12.715862 1428730 out.go:177] 
	W0131 02:18:12.717734 1428730 out.go:239] X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885: no such file or directory
	X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-618885: no such file or directory
	W0131 02:18:12.717758 1428730 out.go:239] * 
	* 
	W0131 02:18:12.725096 1428730 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 02:18:12.726935 1428730 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:425: saving image from minikube to daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (170.96s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-757160 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-757160 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.200232522s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-757160 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-757160 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [77f68f24-e339-478e-9821-f76c8e02fee9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [77f68f24-e339-478e-9821-f76c8e02fee9] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.00421631s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757160 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0131 02:21:22.193753 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:22:48.510109 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:48.515368 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:48.525694 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:48.546006 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:48.586359 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:48.666750 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:48.827173 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:49.147835 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:49.788807 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:51.069538 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:53.630826 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:22:58.751414 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-757160 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.708608831s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-757160 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757160 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.40
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757160 addons disable ingress-dns --alsologtostderr -v=1
E0131 02:23:08.991696 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757160 addons disable ingress-dns --alsologtostderr -v=1: (9.664242559s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757160 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757160 addons disable ingress --alsologtostderr -v=1: (7.533816763s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-757160 -n ingress-addon-legacy-757160
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757160 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757160 logs -n 25: (1.093216455s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-618885 image load --daemon                                     | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC |                     |
	|                | gcr.io/google-containers/addon-resizer:functional-618885                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-618885 image save                                              | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-618885                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-618885 image rm                                                | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-618885                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-618885 image ls                                                | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	| image          | functional-618885 image load                                              | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-618885 image save --daemon                                     | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC |                     |
	|                | gcr.io/google-containers/addon-resizer:functional-618885                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| update-context | functional-618885                                                         | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-618885                                                         | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-618885                                                         | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-618885                                                         | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| service        | functional-618885 service                                                 | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | hello-node-connect --url                                                  |                             |         |         |                     |                     |
	| image          | functional-618885                                                         | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-618885 ssh pgrep                                               | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-618885                                                         | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-618885                                                         | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-618885 image build -t                                          | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	|                | localhost/my-image:functional-618885                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-618885 image ls                                                | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	| delete         | -p functional-618885                                                      | functional-618885           | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:18 UTC |
	| start          | -p ingress-addon-legacy-757160                                            | ingress-addon-legacy-757160 | jenkins | v1.32.0 | 31 Jan 24 02:18 UTC | 31 Jan 24 02:20 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-757160                                               | ingress-addon-legacy-757160 | jenkins | v1.32.0 | 31 Jan 24 02:20 UTC | 31 Jan 24 02:20 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-757160                                               | ingress-addon-legacy-757160 | jenkins | v1.32.0 | 31 Jan 24 02:20 UTC | 31 Jan 24 02:20 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-757160                                               | ingress-addon-legacy-757160 | jenkins | v1.32.0 | 31 Jan 24 02:20 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-757160 ip                                            | ingress-addon-legacy-757160 | jenkins | v1.32.0 | 31 Jan 24 02:23 UTC | 31 Jan 24 02:23 UTC |
	| addons         | ingress-addon-legacy-757160                                               | ingress-addon-legacy-757160 | jenkins | v1.32.0 | 31 Jan 24 02:23 UTC | 31 Jan 24 02:23 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-757160                                               | ingress-addon-legacy-757160 | jenkins | v1.32.0 | 31 Jan 24 02:23 UTC | 31 Jan 24 02:23 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 02:18:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 02:18:41.722439 1429296 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:18:41.722739 1429296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:41.722749 1429296 out.go:309] Setting ErrFile to fd 2...
	I0131 02:18:41.722754 1429296 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:18:41.722984 1429296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:18:41.723670 1429296 out.go:303] Setting JSON to false
	I0131 02:18:41.724672 1429296 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":25265,"bootTime":1706642257,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 02:18:41.724738 1429296 start.go:138] virtualization: kvm guest
	I0131 02:18:41.726898 1429296 out.go:177] * [ingress-addon-legacy-757160] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 02:18:41.728365 1429296 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 02:18:41.728393 1429296 notify.go:220] Checking for updates...
	I0131 02:18:41.729790 1429296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 02:18:41.731338 1429296 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:18:41.732777 1429296 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:18:41.734338 1429296 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 02:18:41.735702 1429296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 02:18:41.737257 1429296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 02:18:41.772822 1429296 out.go:177] * Using the kvm2 driver based on user configuration
	I0131 02:18:41.774139 1429296 start.go:298] selected driver: kvm2
	I0131 02:18:41.774152 1429296 start.go:902] validating driver "kvm2" against <nil>
	I0131 02:18:41.774168 1429296 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 02:18:41.774954 1429296 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:18:41.775052 1429296 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 02:18:41.790394 1429296 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 02:18:41.790499 1429296 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 02:18:41.790720 1429296 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 02:18:41.790779 1429296 cni.go:84] Creating CNI manager for ""
	I0131 02:18:41.790793 1429296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:18:41.790802 1429296 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0131 02:18:41.790814 1429296 start_flags.go:321] config:
	{Name:ingress-addon-legacy-757160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-757160 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:18:41.790973 1429296 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:18:41.792797 1429296 out.go:177] * Starting control plane node ingress-addon-legacy-757160 in cluster ingress-addon-legacy-757160
	I0131 02:18:41.794182 1429296 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0131 02:18:42.158703 1429296 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0131 02:18:42.158757 1429296 cache.go:56] Caching tarball of preloaded images
	I0131 02:18:42.158943 1429296 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0131 02:18:42.160938 1429296 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0131 02:18:42.162342 1429296 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:18:42.260549 1429296 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0131 02:18:54.601380 1429296 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:18:54.601484 1429296 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:18:55.734957 1429296 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0131 02:18:55.735336 1429296 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/config.json ...
	I0131 02:18:55.735378 1429296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/config.json: {Name:mk70d7ad75392e28e81b80b5dced68e32cc27cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:18:55.735785 1429296 start.go:365] acquiring machines lock for ingress-addon-legacy-757160: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 02:18:55.735849 1429296 start.go:369] acquired machines lock for "ingress-addon-legacy-757160" in 37.65µs
	I0131 02:18:55.735876 1429296 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-757160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-757160 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 02:18:55.735951 1429296 start.go:125] createHost starting for "" (driver="kvm2")
	I0131 02:18:55.737842 1429296 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0131 02:18:55.738012 1429296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:18:55.738076 1429296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:18:55.752634 1429296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0131 02:18:55.753130 1429296 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:18:55.753728 1429296 main.go:141] libmachine: Using API Version  1
	I0131 02:18:55.753753 1429296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:18:55.754132 1429296 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:18:55.754343 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetMachineName
	I0131 02:18:55.754521 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:18:55.754680 1429296 start.go:159] libmachine.API.Create for "ingress-addon-legacy-757160" (driver="kvm2")
	I0131 02:18:55.754705 1429296 client.go:168] LocalClient.Create starting
	I0131 02:18:55.754745 1429296 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem
	I0131 02:18:55.754783 1429296 main.go:141] libmachine: Decoding PEM data...
	I0131 02:18:55.754799 1429296 main.go:141] libmachine: Parsing certificate...
	I0131 02:18:55.754854 1429296 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem
	I0131 02:18:55.754872 1429296 main.go:141] libmachine: Decoding PEM data...
	I0131 02:18:55.754885 1429296 main.go:141] libmachine: Parsing certificate...
	I0131 02:18:55.754904 1429296 main.go:141] libmachine: Running pre-create checks...
	I0131 02:18:55.754914 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .PreCreateCheck
	I0131 02:18:55.755265 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetConfigRaw
	I0131 02:18:55.755638 1429296 main.go:141] libmachine: Creating machine...
	I0131 02:18:55.755652 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .Create
	I0131 02:18:55.755775 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Creating KVM machine...
	I0131 02:18:55.757026 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found existing default KVM network
	I0131 02:18:55.757826 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:55.757672 1429352 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b70}
	I0131 02:18:55.763256 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | trying to create private KVM network mk-ingress-addon-legacy-757160 192.168.39.0/24...
	I0131 02:18:55.834571 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | private KVM network mk-ingress-addon-legacy-757160 192.168.39.0/24 created
	I0131 02:18:55.834623 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:55.834532 1429352 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:18:55.834640 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Setting up store path in /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160 ...
	I0131 02:18:55.834662 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Building disk image from file:///home/jenkins/minikube-integration/18051-1412717/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0131 02:18:55.834680 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Downloading /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18051-1412717/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0131 02:18:56.066612 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:56.066470 1429352 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/id_rsa...
	I0131 02:18:56.246157 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:56.245968 1429352 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/ingress-addon-legacy-757160.rawdisk...
	I0131 02:18:56.246202 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Writing magic tar header
	I0131 02:18:56.246225 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Writing SSH key tar header
	I0131 02:18:56.246249 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:56.246130 1429352 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160 ...
	I0131 02:18:56.246274 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160
	I0131 02:18:56.246367 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Setting executable bit set on /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160 (perms=drwx------)
	I0131 02:18:56.246396 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines
	I0131 02:18:56.246415 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Setting executable bit set on /home/jenkins/minikube-integration/18051-1412717/.minikube/machines (perms=drwxr-xr-x)
	I0131 02:18:56.246436 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Setting executable bit set on /home/jenkins/minikube-integration/18051-1412717/.minikube (perms=drwxr-xr-x)
	I0131 02:18:56.246450 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Setting executable bit set on /home/jenkins/minikube-integration/18051-1412717 (perms=drwxrwxr-x)
	I0131 02:18:56.246469 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0131 02:18:56.246502 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0131 02:18:56.246519 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:18:56.246539 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18051-1412717
	I0131 02:18:56.246553 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0131 02:18:56.246565 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Checking permissions on dir: /home/jenkins
	I0131 02:18:56.246574 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Checking permissions on dir: /home
	I0131 02:18:56.246591 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Skipping /home - not owner
	I0131 02:18:56.246612 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Creating domain...
	I0131 02:18:56.247788 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) define libvirt domain using xml: 
	I0131 02:18:56.247827 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) <domain type='kvm'>
	I0131 02:18:56.247854 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   <name>ingress-addon-legacy-757160</name>
	I0131 02:18:56.247878 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   <memory unit='MiB'>4096</memory>
	I0131 02:18:56.247887 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   <vcpu>2</vcpu>
	I0131 02:18:56.247898 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   <features>
	I0131 02:18:56.247934 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <acpi/>
	I0131 02:18:56.247970 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <apic/>
	I0131 02:18:56.247992 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <pae/>
	I0131 02:18:56.248011 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     
	I0131 02:18:56.248028 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   </features>
	I0131 02:18:56.248042 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   <cpu mode='host-passthrough'>
	I0131 02:18:56.248056 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   
	I0131 02:18:56.248068 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   </cpu>
	I0131 02:18:56.248088 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   <os>
	I0131 02:18:56.248109 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <type>hvm</type>
	I0131 02:18:56.248118 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <boot dev='cdrom'/>
	I0131 02:18:56.248124 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <boot dev='hd'/>
	I0131 02:18:56.248130 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <bootmenu enable='no'/>
	I0131 02:18:56.248135 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   </os>
	I0131 02:18:56.248144 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   <devices>
	I0131 02:18:56.248154 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <disk type='file' device='cdrom'>
	I0131 02:18:56.248174 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <source file='/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/boot2docker.iso'/>
	I0131 02:18:56.248193 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <target dev='hdc' bus='scsi'/>
	I0131 02:18:56.248207 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <readonly/>
	I0131 02:18:56.248220 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     </disk>
	I0131 02:18:56.248235 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <disk type='file' device='disk'>
	I0131 02:18:56.248247 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0131 02:18:56.248263 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <source file='/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/ingress-addon-legacy-757160.rawdisk'/>
	I0131 02:18:56.248280 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <target dev='hda' bus='virtio'/>
	I0131 02:18:56.248295 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     </disk>
	I0131 02:18:56.248308 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <interface type='network'>
	I0131 02:18:56.248324 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <source network='mk-ingress-addon-legacy-757160'/>
	I0131 02:18:56.248336 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <model type='virtio'/>
	I0131 02:18:56.248357 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     </interface>
	I0131 02:18:56.248375 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <interface type='network'>
	I0131 02:18:56.248391 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <source network='default'/>
	I0131 02:18:56.248414 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <model type='virtio'/>
	I0131 02:18:56.248428 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     </interface>
	I0131 02:18:56.248441 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <serial type='pty'>
	I0131 02:18:56.248453 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <target port='0'/>
	I0131 02:18:56.248464 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     </serial>
	I0131 02:18:56.248483 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <console type='pty'>
	I0131 02:18:56.248503 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <target type='serial' port='0'/>
	I0131 02:18:56.248518 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     </console>
	I0131 02:18:56.248531 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     <rng model='virtio'>
	I0131 02:18:56.248547 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)       <backend model='random'>/dev/random</backend>
	I0131 02:18:56.248561 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     </rng>
	I0131 02:18:56.248578 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     
	I0131 02:18:56.248595 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)     
	I0131 02:18:56.248609 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160)   </devices>
	I0131 02:18:56.248620 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) </domain>
	I0131 02:18:56.248633 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) 
	I0131 02:18:56.252665 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:7a:d4:54 in network default
	I0131 02:18:56.253356 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Ensuring networks are active...
	I0131 02:18:56.253379 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:18:56.254108 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Ensuring network default is active
	I0131 02:18:56.254401 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Ensuring network mk-ingress-addon-legacy-757160 is active
	I0131 02:18:56.254910 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Getting domain xml...
	I0131 02:18:56.255667 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Creating domain...
	I0131 02:18:57.450124 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Waiting to get IP...
	I0131 02:18:57.450977 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:18:57.451367 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:18:57.451432 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:57.451350 1429352 retry.go:31] will retry after 288.192285ms: waiting for machine to come up
	I0131 02:18:57.741018 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:18:57.741435 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:18:57.741469 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:57.741406 1429352 retry.go:31] will retry after 307.700666ms: waiting for machine to come up
	I0131 02:18:58.051044 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:18:58.051436 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:18:58.051466 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:58.051379 1429352 retry.go:31] will retry after 413.006875ms: waiting for machine to come up
	I0131 02:18:58.465977 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:18:58.466409 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:18:58.466443 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:58.466336 1429352 retry.go:31] will retry after 505.375008ms: waiting for machine to come up
	I0131 02:18:58.973280 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:18:58.973733 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:18:58.973762 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:58.973676 1429352 retry.go:31] will retry after 499.896011ms: waiting for machine to come up
	I0131 02:18:59.475406 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:18:59.475777 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:18:59.475808 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:18:59.475751 1429352 retry.go:31] will retry after 818.022072ms: waiting for machine to come up
	I0131 02:19:00.295831 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:00.296288 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:19:00.296314 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:19:00.296215 1429352 retry.go:31] will retry after 823.33279ms: waiting for machine to come up
	I0131 02:19:01.120534 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:01.120979 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:19:01.121008 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:19:01.120929 1429352 retry.go:31] will retry after 956.096549ms: waiting for machine to come up
	I0131 02:19:02.078720 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:02.079187 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:19:02.079214 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:19:02.079142 1429352 retry.go:31] will retry after 1.25889969s: waiting for machine to come up
	I0131 02:19:03.339627 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:03.340015 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:19:03.340046 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:19:03.339956 1429352 retry.go:31] will retry after 1.608980982s: waiting for machine to come up
	I0131 02:19:04.951065 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:04.951596 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:19:04.951620 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:19:04.951572 1429352 retry.go:31] will retry after 2.4646761s: waiting for machine to come up
	I0131 02:19:07.418766 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:07.419270 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:19:07.419306 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:19:07.419216 1429352 retry.go:31] will retry after 2.773537022s: waiting for machine to come up
	I0131 02:19:10.196176 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:10.196591 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:19:10.196621 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:19:10.196529 1429352 retry.go:31] will retry after 4.097409457s: waiting for machine to come up
	I0131 02:19:14.295552 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:14.295999 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find current IP address of domain ingress-addon-legacy-757160 in network mk-ingress-addon-legacy-757160
	I0131 02:19:14.296025 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | I0131 02:19:14.295946 1429352 retry.go:31] will retry after 4.351474657s: waiting for machine to come up
	I0131 02:19:18.650846 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:18.651331 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Found IP for machine: 192.168.39.40
	I0131 02:19:18.651358 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has current primary IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:18.651367 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Reserving static IP address...
	I0131 02:19:18.651728 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-757160", mac: "52:54:00:26:30:ed", ip: "192.168.39.40"} in network mk-ingress-addon-legacy-757160
	I0131 02:19:18.729285 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Getting to WaitForSSH function...
	I0131 02:19:18.729327 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Reserved static IP address: 192.168.39.40
	I0131 02:19:18.729350 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Waiting for SSH to be available...
	I0131 02:19:18.732215 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:18.732641 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:minikube Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:18.732681 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:18.732807 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Using SSH client type: external
	I0131 02:19:18.732835 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/id_rsa (-rw-------)
	I0131 02:19:18.732879 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 02:19:18.732897 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | About to run SSH command:
	I0131 02:19:18.732915 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | exit 0
	I0131 02:19:18.826509 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | SSH cmd err, output: <nil>: 
	I0131 02:19:18.826815 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) KVM machine creation complete!
	I0131 02:19:18.827113 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetConfigRaw
	I0131 02:19:18.827695 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:19:18.827942 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:19:18.828140 1429296 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0131 02:19:18.828156 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetState
	I0131 02:19:18.829538 1429296 main.go:141] libmachine: Detecting operating system of created instance...
	I0131 02:19:18.829555 1429296 main.go:141] libmachine: Waiting for SSH to be available...
	I0131 02:19:18.829564 1429296 main.go:141] libmachine: Getting to WaitForSSH function...
	I0131 02:19:18.829576 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:18.831812 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:18.832188 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:18.832220 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:18.832326 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:18.832531 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:18.832702 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:18.832829 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:18.833031 1429296 main.go:141] libmachine: Using SSH client type: native
	I0131 02:19:18.833367 1429296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0131 02:19:18.833379 1429296 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0131 02:19:18.957572 1429296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 02:19:18.957613 1429296 main.go:141] libmachine: Detecting the provisioner...
	I0131 02:19:18.957628 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:18.960696 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:18.961070 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:18.961099 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:18.961310 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:18.961571 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:18.961754 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:18.961938 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:18.962117 1429296 main.go:141] libmachine: Using SSH client type: native
	I0131 02:19:18.962501 1429296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0131 02:19:18.962517 1429296 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0131 02:19:19.091170 1429296 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0131 02:19:19.091263 1429296 main.go:141] libmachine: found compatible host: buildroot
	I0131 02:19:19.091280 1429296 main.go:141] libmachine: Provisioning with buildroot...
	I0131 02:19:19.091298 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetMachineName
	I0131 02:19:19.091612 1429296 buildroot.go:166] provisioning hostname "ingress-addon-legacy-757160"
	I0131 02:19:19.091645 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetMachineName
	I0131 02:19:19.091870 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:19.094940 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.095434 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:19.095474 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.095667 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:19.095856 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:19.096022 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:19.096176 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:19.096332 1429296 main.go:141] libmachine: Using SSH client type: native
	I0131 02:19:19.096658 1429296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0131 02:19:19.096673 1429296 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-757160 && echo "ingress-addon-legacy-757160" | sudo tee /etc/hostname
	I0131 02:19:19.233742 1429296 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-757160
	
	I0131 02:19:19.233784 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:19.236614 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.236979 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:19.237014 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.237221 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:19.237429 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:19.237722 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:19.237874 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:19.238055 1429296 main.go:141] libmachine: Using SSH client type: native
	I0131 02:19:19.238381 1429296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0131 02:19:19.238400 1429296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-757160' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-757160/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-757160' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 02:19:19.373737 1429296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 02:19:19.373778 1429296 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 02:19:19.373807 1429296 buildroot.go:174] setting up certificates
	I0131 02:19:19.373819 1429296 provision.go:83] configureAuth start
	I0131 02:19:19.373834 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetMachineName
	I0131 02:19:19.374170 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetIP
	I0131 02:19:19.376890 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.377283 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:19.377320 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.377438 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:19.379479 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.379762 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:19.379787 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.379893 1429296 provision.go:138] copyHostCerts
	I0131 02:19:19.379930 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 02:19:19.379989 1429296 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 02:19:19.380000 1429296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 02:19:19.380081 1429296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 02:19:19.380172 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 02:19:19.380197 1429296 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 02:19:19.380207 1429296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 02:19:19.380245 1429296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 02:19:19.380305 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 02:19:19.380329 1429296 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 02:19:19.380337 1429296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 02:19:19.380371 1429296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 02:19:19.380432 1429296 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-757160 san=[192.168.39.40 192.168.39.40 localhost 127.0.0.1 minikube ingress-addon-legacy-757160]
	I0131 02:19:19.452324 1429296 provision.go:172] copyRemoteCerts
	I0131 02:19:19.452391 1429296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 02:19:19.452425 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:19.455269 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.455584 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:19.455619 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.455796 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:19.456012 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:19.456209 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:19.456369 1429296 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/id_rsa Username:docker}
	I0131 02:19:19.548688 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0131 02:19:19.548780 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 02:19:19.570741 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0131 02:19:19.570823 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 02:19:19.594041 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0131 02:19:19.594146 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0131 02:19:19.616166 1429296 provision.go:86] duration metric: configureAuth took 242.325475ms
	I0131 02:19:19.616195 1429296 buildroot.go:189] setting minikube options for container-runtime
	I0131 02:19:19.616427 1429296 config.go:182] Loaded profile config "ingress-addon-legacy-757160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0131 02:19:19.616522 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:19.619156 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.619457 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:19.619498 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.619663 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:19.619872 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:19.620105 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:19.620243 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:19.620427 1429296 main.go:141] libmachine: Using SSH client type: native
	I0131 02:19:19.620792 1429296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0131 02:19:19.620814 1429296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 02:19:19.927681 1429296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 02:19:19.927718 1429296 main.go:141] libmachine: Checking connection to Docker...
	I0131 02:19:19.927731 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetURL
	I0131 02:19:19.929035 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Using libvirt version 6000000
	I0131 02:19:19.931156 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.931536 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:19.931569 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.931761 1429296 main.go:141] libmachine: Docker is up and running!
	I0131 02:19:19.931772 1429296 main.go:141] libmachine: Reticulating splines...
	I0131 02:19:19.931779 1429296 client.go:171] LocalClient.Create took 24.17706336s
	I0131 02:19:19.931804 1429296 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-757160" took 24.17712285s
	I0131 02:19:19.931848 1429296 start.go:300] post-start starting for "ingress-addon-legacy-757160" (driver="kvm2")
	I0131 02:19:19.931866 1429296 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 02:19:19.931900 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:19:19.932189 1429296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 02:19:19.932223 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:19.934167 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.934540 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:19.934571 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:19.934753 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:19.934913 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:19.935067 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:19.935191 1429296 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/id_rsa Username:docker}
	I0131 02:19:20.026975 1429296 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 02:19:20.030837 1429296 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 02:19:20.030864 1429296 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 02:19:20.030931 1429296 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 02:19:20.031027 1429296 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 02:19:20.031041 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> /etc/ssl/certs/14199762.pem
	I0131 02:19:20.031183 1429296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 02:19:20.038944 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:19:20.059779 1429296 start.go:303] post-start completed in 127.912562ms
	I0131 02:19:20.059848 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetConfigRaw
	I0131 02:19:20.060560 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetIP
	I0131 02:19:20.063418 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.063824 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:20.063860 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.064066 1429296 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/config.json ...
	I0131 02:19:20.064252 1429296 start.go:128] duration metric: createHost completed in 24.32828834s
	I0131 02:19:20.064276 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:20.066561 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.066864 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:20.066899 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.067170 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:20.067365 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:20.067534 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:20.067689 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:20.067855 1429296 main.go:141] libmachine: Using SSH client type: native
	I0131 02:19:20.068217 1429296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0131 02:19:20.068231 1429296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0131 02:19:20.195150 1429296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706667560.168354761
	
	I0131 02:19:20.195176 1429296 fix.go:206] guest clock: 1706667560.168354761
	I0131 02:19:20.195187 1429296 fix.go:219] Guest: 2024-01-31 02:19:20.168354761 +0000 UTC Remote: 2024-01-31 02:19:20.064264876 +0000 UTC m=+38.393498091 (delta=104.089885ms)
	I0131 02:19:20.195214 1429296 fix.go:190] guest clock delta is within tolerance: 104.089885ms
	I0131 02:19:20.195221 1429296 start.go:83] releasing machines lock for "ingress-addon-legacy-757160", held for 24.459361183s
	I0131 02:19:20.195250 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:19:20.195559 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetIP
	I0131 02:19:20.198013 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.198369 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:20.198404 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.198544 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:19:20.199016 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:19:20.199198 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:19:20.199325 1429296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 02:19:20.199386 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:20.199433 1429296 ssh_runner.go:195] Run: cat /version.json
	I0131 02:19:20.199461 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:20.202065 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.202089 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.202438 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:20.202490 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.202547 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:20.202574 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:20.202635 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:20.202843 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:20.202862 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:20.203029 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:20.203061 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:20.203140 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:20.203214 1429296 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/id_rsa Username:docker}
	I0131 02:19:20.203280 1429296 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/id_rsa Username:docker}
	I0131 02:19:20.332258 1429296 ssh_runner.go:195] Run: systemctl --version
	I0131 02:19:20.337905 1429296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 02:19:20.490193 1429296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 02:19:20.496341 1429296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 02:19:20.496428 1429296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 02:19:20.510699 1429296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 02:19:20.510727 1429296 start.go:475] detecting cgroup driver to use...
	I0131 02:19:20.510794 1429296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 02:19:20.525756 1429296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 02:19:20.539586 1429296 docker.go:217] disabling cri-docker service (if available) ...
	I0131 02:19:20.539655 1429296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 02:19:20.553859 1429296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 02:19:20.567842 1429296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 02:19:20.685757 1429296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 02:19:20.806619 1429296 docker.go:233] disabling docker service ...
	I0131 02:19:20.806698 1429296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 02:19:20.819673 1429296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 02:19:20.830549 1429296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 02:19:20.946386 1429296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 02:19:21.059586 1429296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 02:19:21.071711 1429296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 02:19:21.087509 1429296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0131 02:19:21.087587 1429296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:19:21.097204 1429296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 02:19:21.097295 1429296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:19:21.106341 1429296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:19:21.115325 1429296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:19:21.124253 1429296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
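	Note: the sed edits above pin the pause image and switch CRI-O to the cgroupfs cgroup driver through the 02-crio.conf drop-in. A quick way to confirm the drop-in ended up as intended (a sketch only; the path and keys are taken from the commands in this log, the grep itself is not part of the test run):
	# Show the keys the provisioner just rewrote in the CRI-O drop-in.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# Expected per this log: pause_image = "registry.k8s.io/pause:3.2",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod"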
	I0131 02:19:21.133417 1429296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 02:19:21.141602 1429296 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 02:19:21.141670 1429296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 02:19:21.153453 1429296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
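	The failed sysctl followed by modprobe above is the usual check-then-load pattern for bridge netfilter, which Kubernetes pod networking relies on. A minimal shell sketch of the same preparation (assuming a guest where the br_netfilter module is available; not taken verbatim from this run):
	# Load br_netfilter if the bridge-nf sysctl is not yet exposed.
	sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 || sudo modprobe br_netfilter
	# Pod-to-pod traffic also needs IPv4 forwarding enabled.
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward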
	I0131 02:19:21.161654 1429296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 02:19:21.272939 1429296 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 02:19:21.430184 1429296 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 02:19:21.430279 1429296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 02:19:21.434597 1429296 start.go:543] Will wait 60s for crictl version
	I0131 02:19:21.434665 1429296 ssh_runner.go:195] Run: which crictl
	I0131 02:19:21.437904 1429296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 02:19:21.474775 1429296 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 02:19:21.474864 1429296 ssh_runner.go:195] Run: crio --version
	I0131 02:19:21.512936 1429296 ssh_runner.go:195] Run: crio --version
	I0131 02:19:21.565498 1429296 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0131 02:19:21.567077 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetIP
	I0131 02:19:21.570810 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:21.571219 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:21.571244 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:21.571516 1429296 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 02:19:21.575646 1429296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 02:19:21.587857 1429296 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0131 02:19:21.587929 1429296 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 02:19:21.619966 1429296 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0131 02:19:21.620052 1429296 ssh_runner.go:195] Run: which lz4
	I0131 02:19:21.623449 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0131 02:19:21.623567 1429296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0131 02:19:21.627158 1429296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 02:19:21.627191 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0131 02:19:23.461133 1429296 crio.go:444] Took 1.837604 seconds to copy over tarball
	I0131 02:19:23.461225 1429296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 02:19:26.378360 1429296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.917099506s)
	I0131 02:19:26.378392 1429296 crio.go:451] Took 2.917232 seconds to extract the tarball
	I0131 02:19:26.378405 1429296 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 02:19:26.420393 1429296 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 02:19:26.467575 1429296 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0131 02:19:26.467602 1429296 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 02:19:26.467679 1429296 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 02:19:26.467725 1429296 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0131 02:19:26.467765 1429296 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0131 02:19:26.467806 1429296 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0131 02:19:26.467885 1429296 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0131 02:19:26.467903 1429296 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0131 02:19:26.467703 1429296 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0131 02:19:26.467901 1429296 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0131 02:19:26.469216 1429296 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0131 02:19:26.469232 1429296 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0131 02:19:26.469259 1429296 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0131 02:19:26.469297 1429296 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0131 02:19:26.469219 1429296 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0131 02:19:26.469354 1429296 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0131 02:19:26.469411 1429296 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0131 02:19:26.469472 1429296 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 02:19:26.656562 1429296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0131 02:19:26.662833 1429296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0131 02:19:26.669373 1429296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0131 02:19:26.690821 1429296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0131 02:19:26.690821 1429296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0131 02:19:26.692672 1429296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0131 02:19:26.697455 1429296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0131 02:19:26.753914 1429296 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0131 02:19:26.753966 1429296 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0131 02:19:26.754023 1429296 ssh_runner.go:195] Run: which crictl
	I0131 02:19:26.778261 1429296 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0131 02:19:26.778324 1429296 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0131 02:19:26.778387 1429296 ssh_runner.go:195] Run: which crictl
	I0131 02:19:26.784366 1429296 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0131 02:19:26.784418 1429296 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0131 02:19:26.784469 1429296 ssh_runner.go:195] Run: which crictl
	I0131 02:19:26.847067 1429296 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0131 02:19:26.847092 1429296 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0131 02:19:26.847121 1429296 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0131 02:19:26.847127 1429296 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0131 02:19:26.847146 1429296 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0131 02:19:26.847068 1429296 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0131 02:19:26.847173 1429296 ssh_runner.go:195] Run: which crictl
	I0131 02:19:26.847210 1429296 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0131 02:19:26.847215 1429296 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0131 02:19:26.847167 1429296 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0131 02:19:26.847248 1429296 ssh_runner.go:195] Run: which crictl
	I0131 02:19:26.847255 1429296 ssh_runner.go:195] Run: which crictl
	I0131 02:19:26.847264 1429296 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0131 02:19:26.847172 1429296 ssh_runner.go:195] Run: which crictl
	I0131 02:19:26.847299 1429296 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0131 02:19:26.908950 1429296 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0131 02:19:26.909001 1429296 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0131 02:19:26.935482 1429296 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0131 02:19:26.935514 1429296 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0131 02:19:26.935536 1429296 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0131 02:19:26.935549 1429296 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0131 02:19:26.935638 1429296 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0131 02:19:26.957799 1429296 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0131 02:19:26.994444 1429296 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0131 02:19:27.009376 1429296 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0131 02:19:27.009417 1429296 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0131 02:19:27.351257 1429296 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 02:19:27.487521 1429296 cache_images.go:92] LoadImages completed in 1.019898385s
	W0131 02:19:27.487636 1429296 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I0131 02:19:27.487710 1429296 ssh_runner.go:195] Run: crio config
	I0131 02:19:27.541044 1429296 cni.go:84] Creating CNI manager for ""
	I0131 02:19:27.541072 1429296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:19:27.541094 1429296 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 02:19:27.541154 1429296 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.40 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-757160 NodeName:ingress-addon-legacy-757160 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0131 02:19:27.541280 1429296 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-757160"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 02:19:27.541349 1429296 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-757160 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-757160 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
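The kubeadm YAML and kubelet unit above are rendered from the option set logged at kubeadm.go:176 and then copied onto the node (see the scp lines that follow). As a rough, hypothetical illustration of that rendering step with Go's text/template, using only a subset of the fields shown above (this is not minikube's actual template or type names):

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is an illustrative subset of the options visible in the log above.
type kubeadmOpts struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.39.40",
		BindPort:          8443,
		NodeName:          "ingress-addon-legacy-757160",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.18.20",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}

Running it prints a two-document YAML similar to the InitConfiguration/ClusterConfiguration pair shown above.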
	I0131 02:19:27.541407 1429296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0131 02:19:27.550073 1429296 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 02:19:27.550144 1429296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 02:19:27.557833 1429296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0131 02:19:27.572411 1429296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0131 02:19:27.586826 1429296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0131 02:19:27.602289 1429296 ssh_runner.go:195] Run: grep 192.168.39.40	control-plane.minikube.internal$ /etc/hosts
	I0131 02:19:27.605817 1429296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 02:19:27.617350 1429296 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160 for IP: 192.168.39.40
	I0131 02:19:27.617389 1429296 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:19:27.617561 1429296 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 02:19:27.617600 1429296 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 02:19:27.617696 1429296 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.key
	I0131 02:19:27.617710 1429296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt with IP's: []
	I0131 02:19:27.894269 1429296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt ...
	I0131 02:19:27.894308 1429296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: {Name:mke9a9cbc111c31f6d2513f1d943b2478b257dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:19:27.894546 1429296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.key ...
	I0131 02:19:27.894566 1429296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.key: {Name:mk8d8d506391798a94d8ee86d7a9b4a90a0fb201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:19:27.894710 1429296 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.key.7fcbe345
	I0131 02:19:27.894730 1429296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.crt.7fcbe345 with IP's: [192.168.39.40 10.96.0.1 127.0.0.1 10.0.0.1]
	I0131 02:19:28.023477 1429296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.crt.7fcbe345 ...
	I0131 02:19:28.023515 1429296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.crt.7fcbe345: {Name:mk874a12dedd2f45c4a813c5239d35228fed6ff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:19:28.023808 1429296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.key.7fcbe345 ...
	I0131 02:19:28.023845 1429296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.key.7fcbe345: {Name:mk728256559007c716136bd094c4186d08de6ebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:19:28.024073 1429296 certs.go:337] copying /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.crt.7fcbe345 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.crt
	I0131 02:19:28.024194 1429296 certs.go:341] copying /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.key.7fcbe345 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.key
	I0131 02:19:28.024256 1429296 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.key
	I0131 02:19:28.024275 1429296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.crt with IP's: []
	I0131 02:19:28.160628 1429296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.crt ...
	I0131 02:19:28.160666 1429296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.crt: {Name:mk2853d6b58d7c39058c38866054d03b24383fed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:19:28.160893 1429296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.key ...
	I0131 02:19:28.160917 1429296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.key: {Name:mkca9f3797e7483c2c12ff3d1e4c4ad27ec1c18c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
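The certs.go/crypto.go lines above issue three leaf certificates (client, apiserver, proxy-client), each signed by an already-existing minikube CA. A minimal sketch of issuing one such client certificate with the standard library, assuming an RSA PKCS#1 CA key and using illustrative file paths and subject names (error handling is mostly omitted for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Load the CA pair; paths and key format are assumptions for this sketch.
	caPEM, _ := os.ReadFile("ca.crt")
	caKeyPEM, _ := os.ReadFile("ca.key")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Key pair and template for the new client certificate.
	priv, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}

	// Sign the new certificate with the CA and write it out PEM-encoded.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	out, _ := os.Create("client.crt")
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}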
	I0131 02:19:28.161026 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0131 02:19:28.161053 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0131 02:19:28.161064 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0131 02:19:28.161084 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0131 02:19:28.161097 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0131 02:19:28.161110 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0131 02:19:28.161129 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0131 02:19:28.161141 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0131 02:19:28.161199 1429296 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 02:19:28.161234 1429296 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 02:19:28.161245 1429296 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 02:19:28.161272 1429296 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 02:19:28.161294 1429296 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 02:19:28.161315 1429296 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 02:19:28.161354 1429296 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:19:28.161398 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:19:28.161424 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem -> /usr/share/ca-certificates/1419976.pem
	I0131 02:19:28.161435 1429296 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> /usr/share/ca-certificates/14199762.pem
	I0131 02:19:28.162091 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 02:19:28.184215 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 02:19:28.205330 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 02:19:28.226281 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 02:19:28.249791 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 02:19:28.273134 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 02:19:28.295859 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 02:19:28.319774 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 02:19:28.343943 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 02:19:28.365908 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 02:19:28.389860 1429296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 02:19:28.412238 1429296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 02:19:28.427126 1429296 ssh_runner.go:195] Run: openssl version
	I0131 02:19:28.432445 1429296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 02:19:28.441572 1429296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:19:28.445804 1429296 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:19:28.445857 1429296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:19:28.450922 1429296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 02:19:28.459656 1429296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 02:19:28.468361 1429296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 02:19:28.472580 1429296 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 02:19:28.472641 1429296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 02:19:28.477604 1429296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 02:19:28.486702 1429296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 02:19:28.495840 1429296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 02:19:28.500411 1429296 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 02:19:28.500470 1429296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 02:19:28.505510 1429296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
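The `openssl x509 -hash` plus `ln -fs` pairs above install each CA certificate under /etc/ssl/certs/<subject-hash>.0, which is the name OpenSSL-based TLS clients use to look trusted certificates up. A small sketch of the same idea, shelling out to openssl for the hash (it assumes the openssl binary is on PATH and the process may write to the target directory):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a CA certificate into dir under its OpenSSL subject-hash
// name (<hash>.0), mirroring the "openssl x509 -hash" + "ln -fs" steps above.
func installCA(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	os.Remove(link) // ignore the error: the link may simply not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}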
	I0131 02:19:28.514237 1429296 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 02:19:28.517792 1429296 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0131 02:19:28.517846 1429296 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-757160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-757160 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:19:28.517939 1429296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 02:19:28.517988 1429296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 02:19:28.555361 1429296 cri.go:89] found id: ""
	I0131 02:19:28.555440 1429296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 02:19:28.563917 1429296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 02:19:28.571598 1429296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 02:19:28.579260 1429296 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 02:19:28.579310 1429296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0131 02:19:28.630549 1429296 kubeadm.go:322] W0131 02:19:28.613586     963 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0131 02:19:28.758027 1429296 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 02:19:32.102701 1429296 kubeadm.go:322] W0131 02:19:32.088703     963 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0131 02:19:32.104029 1429296 kubeadm.go:322] W0131 02:19:32.090080     963 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0131 02:19:42.145683 1429296 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0131 02:19:42.145797 1429296 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 02:19:42.145926 1429296 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 02:19:42.146043 1429296 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 02:19:42.146199 1429296 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 02:19:42.146360 1429296 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 02:19:42.146474 1429296 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 02:19:42.146555 1429296 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 02:19:42.146637 1429296 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 02:19:42.148360 1429296 out.go:204]   - Generating certificates and keys ...
	I0131 02:19:42.148451 1429296 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 02:19:42.148549 1429296 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 02:19:42.148647 1429296 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0131 02:19:42.148748 1429296 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0131 02:19:42.148893 1429296 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0131 02:19:42.148978 1429296 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0131 02:19:42.149061 1429296 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0131 02:19:42.149242 1429296 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-757160 localhost] and IPs [192.168.39.40 127.0.0.1 ::1]
	I0131 02:19:42.149314 1429296 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0131 02:19:42.149489 1429296 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-757160 localhost] and IPs [192.168.39.40 127.0.0.1 ::1]
	I0131 02:19:42.149578 1429296 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0131 02:19:42.149658 1429296 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0131 02:19:42.149727 1429296 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0131 02:19:42.149818 1429296 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 02:19:42.149878 1429296 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 02:19:42.149958 1429296 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 02:19:42.150063 1429296 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 02:19:42.150145 1429296 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 02:19:42.150243 1429296 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 02:19:42.152046 1429296 out.go:204]   - Booting up control plane ...
	I0131 02:19:42.152131 1429296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 02:19:42.152213 1429296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 02:19:42.152337 1429296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 02:19:42.152436 1429296 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 02:19:42.152585 1429296 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 02:19:42.152648 1429296 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503122 seconds
	I0131 02:19:42.152749 1429296 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 02:19:42.152894 1429296 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 02:19:42.152990 1429296 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 02:19:42.153137 1429296 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-757160 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0131 02:19:42.153192 1429296 kubeadm.go:322] [bootstrap-token] Using token: dx80op.ai6vzu8y6d3bar7k
	I0131 02:19:42.154646 1429296 out.go:204]   - Configuring RBAC rules ...
	I0131 02:19:42.154737 1429296 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 02:19:42.154807 1429296 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 02:19:42.154938 1429296 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 02:19:42.155055 1429296 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 02:19:42.155158 1429296 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 02:19:42.155246 1429296 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 02:19:42.155338 1429296 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 02:19:42.155386 1429296 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 02:19:42.155426 1429296 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 02:19:42.155434 1429296 kubeadm.go:322] 
	I0131 02:19:42.155485 1429296 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 02:19:42.155492 1429296 kubeadm.go:322] 
	I0131 02:19:42.155552 1429296 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 02:19:42.155558 1429296 kubeadm.go:322] 
	I0131 02:19:42.155579 1429296 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 02:19:42.155624 1429296 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 02:19:42.155671 1429296 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 02:19:42.155678 1429296 kubeadm.go:322] 
	I0131 02:19:42.155724 1429296 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 02:19:42.155827 1429296 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 02:19:42.155895 1429296 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 02:19:42.155902 1429296 kubeadm.go:322] 
	I0131 02:19:42.155973 1429296 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 02:19:42.156057 1429296 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 02:19:42.156071 1429296 kubeadm.go:322] 
	I0131 02:19:42.156134 1429296 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dx80op.ai6vzu8y6d3bar7k \
	I0131 02:19:42.156231 1429296 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 02:19:42.156251 1429296 kubeadm.go:322]     --control-plane 
	I0131 02:19:42.156257 1429296 kubeadm.go:322] 
	I0131 02:19:42.156331 1429296 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 02:19:42.156338 1429296 kubeadm.go:322] 
	I0131 02:19:42.156396 1429296 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dx80op.ai6vzu8y6d3bar7k \
	I0131 02:19:42.156488 1429296 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
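Every [kubeadm.go:322] line above is one line of kubeadm init's own output, re-logged as it streams back from the VM. Stripped of the SSH plumbing, the pattern is simply scanning a subprocess's stdout line by line; a generic sketch of that pattern, not minikube's actual runner:

package main

import (
	"bufio"
	"log"
	"os/exec"
)

func main() {
	// Illustrative command; in the log above this runs over SSH inside the VM
	// and carries additional --ignore-preflight-errors flags.
	cmd := exec.Command("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		log.Printf("kubeadm: %s", sc.Text()) // each output line becomes one log entry
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}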
	I0131 02:19:42.156499 1429296 cni.go:84] Creating CNI manager for ""
	I0131 02:19:42.156506 1429296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:19:42.159256 1429296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 02:19:42.160682 1429296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 02:19:42.170637 1429296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
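The 457-byte /etc/cni/net.d/1-k8s.conflist written above is not reproduced in the log. For orientation only, a generic bridge-plus-portmap CNI config list of the kind a bridge setup uses could be assembled like this; every field value here is illustrative, not the file's actual contents:

package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Generic bridge CNI config list; the real 1-k8s.conflist may differ.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	enc.Encode(conflist) // in the log this content is copied to /etc/cni/net.d/1-k8s.conflist
}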
	I0131 02:19:42.187252 1429296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 02:19:42.187308 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:42.187378 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=ingress-addon-legacy-757160 minikube.k8s.io/updated_at=2024_01_31T02_19_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:42.539698 1429296 ops.go:34] apiserver oom_adj: -16
	I0131 02:19:42.539776 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:43.040003 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:43.540180 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:44.040440 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:44.540602 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:45.040369 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:45.540470 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:46.040213 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:46.540392 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:47.040860 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:47.540014 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:48.040665 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:48.540134 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:49.040429 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:49.540290 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:50.040422 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:50.540078 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:51.040585 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:51.540225 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:52.039871 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:52.540348 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:53.040648 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:53.540051 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:54.040461 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:54.540603 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:55.040410 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:55.540080 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:56.040458 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:56.540274 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:57.040044 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:57.539876 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:58.040590 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:58.539828 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:59.040157 1429296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:19:59.158680 1429296 kubeadm.go:1088] duration metric: took 16.971426534s to wait for elevateKubeSystemPrivileges.
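The long run of identical `kubectl get sa default` invocations above is a plain poll: retry roughly every 500 ms until the default service account exists or a deadline passes. The same control flow with only the standard library, substituting a hypothetical check function for the kubectl call:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pollUntil calls check every interval until it reports success or ctx expires.
func pollUntil(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		ok, err := check()
		if err == nil && ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for condition")
		case <-t.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	attempts := 0
	err := pollUntil(ctx, 500*time.Millisecond, func() (bool, error) {
		attempts++
		// The real code would run "kubectl get sa default" over SSH here.
		return attempts >= 3, nil
	})
	fmt.Println("done after", attempts, "attempts, err =", err)
}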
	I0131 02:19:59.158769 1429296 kubeadm.go:406] StartCluster complete in 30.640925508s
	I0131 02:19:59.158802 1429296 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:19:59.158898 1429296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:19:59.159698 1429296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:19:59.159924 1429296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 02:19:59.159981 1429296 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 02:19:59.160071 1429296 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-757160"
	I0131 02:19:59.160098 1429296 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-757160"
	I0131 02:19:59.160096 1429296 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-757160"
	I0131 02:19:59.160185 1429296 config.go:182] Loaded profile config "ingress-addon-legacy-757160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0131 02:19:59.160206 1429296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-757160"
	I0131 02:19:59.160211 1429296 host.go:66] Checking if "ingress-addon-legacy-757160" exists ...
	I0131 02:19:59.160612 1429296 kapi.go:59] client config for ingress-addon-legacy-757160: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:
[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:19:59.160743 1429296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:19:59.160778 1429296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:19:59.160790 1429296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:19:59.160812 1429296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:19:59.161367 1429296 cert_rotation.go:137] Starting client certificate rotation controller
	I0131 02:19:59.177278 1429296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41207
	I0131 02:19:59.177333 1429296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44625
	I0131 02:19:59.177809 1429296 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:19:59.177871 1429296 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:19:59.178334 1429296 main.go:141] libmachine: Using API Version  1
	I0131 02:19:59.178365 1429296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:19:59.178382 1429296 main.go:141] libmachine: Using API Version  1
	I0131 02:19:59.178400 1429296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:19:59.178750 1429296 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:19:59.178763 1429296 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:19:59.178978 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetState
	I0131 02:19:59.179297 1429296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:19:59.179327 1429296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:19:59.181431 1429296 kapi.go:59] client config for ingress-addon-legacy-757160: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:
[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:19:59.181762 1429296 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-757160"
	I0131 02:19:59.181803 1429296 host.go:66] Checking if "ingress-addon-legacy-757160" exists ...
	I0131 02:19:59.182081 1429296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:19:59.182110 1429296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:19:59.195391 1429296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0131 02:19:59.195892 1429296 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:19:59.196512 1429296 main.go:141] libmachine: Using API Version  1
	I0131 02:19:59.196541 1429296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:19:59.196612 1429296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0131 02:19:59.196876 1429296 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:19:59.197024 1429296 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:19:59.197133 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetState
	I0131 02:19:59.197905 1429296 main.go:141] libmachine: Using API Version  1
	I0131 02:19:59.197932 1429296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:19:59.198275 1429296 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:19:59.198865 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:19:59.198895 1429296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:19:59.198925 1429296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:19:59.200778 1429296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 02:19:59.202413 1429296 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 02:19:59.202433 1429296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 02:19:59.202453 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:59.205567 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:59.206070 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:59.206102 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:59.206347 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:59.206549 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:59.206719 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:59.206885 1429296 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/id_rsa Username:docker}
	I0131 02:19:59.215313 1429296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46509
	I0131 02:19:59.215859 1429296 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:19:59.216324 1429296 main.go:141] libmachine: Using API Version  1
	I0131 02:19:59.216351 1429296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:19:59.216758 1429296 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:19:59.216978 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetState
	I0131 02:19:59.218773 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .DriverName
	I0131 02:19:59.219041 1429296 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 02:19:59.219057 1429296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 02:19:59.219077 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHHostname
	I0131 02:19:59.221702 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:59.222133 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:30:ed", ip: ""} in network mk-ingress-addon-legacy-757160: {Iface:virbr1 ExpiryTime:2024-01-31 03:19:10 +0000 UTC Type:0 Mac:52:54:00:26:30:ed Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ingress-addon-legacy-757160 Clientid:01:52:54:00:26:30:ed}
	I0131 02:19:59.222163 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | domain ingress-addon-legacy-757160 has defined IP address 192.168.39.40 and MAC address 52:54:00:26:30:ed in network mk-ingress-addon-legacy-757160
	I0131 02:19:59.222377 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHPort
	I0131 02:19:59.222566 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHKeyPath
	I0131 02:19:59.222730 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .GetSSHUsername
	I0131 02:19:59.222888 1429296 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/ingress-addon-legacy-757160/id_rsa Username:docker}
	I0131 02:19:59.314613 1429296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
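The bash pipeline above rewrites the CoreDNS Corefile: it inserts a hosts stanza mapping host.minikube.internal to the host-side gateway IP just before the forward directive (and a log directive before errors), then feeds the result to kubectl replace. The sketch below shows just the hosts insertion as a plain text transformation, assuming the stock Corefile layout:

package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		// Insert the hosts stanza right before the forward-to-resolv.conf line.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}`
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}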
	I0131 02:19:59.381070 1429296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 02:19:59.398095 1429296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 02:19:59.665249 1429296 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-757160" context rescaled to 1 replicas
	I0131 02:19:59.665300 1429296 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 02:19:59.667589 1429296 out.go:177] * Verifying Kubernetes components...
	I0131 02:19:59.669330 1429296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:19:59.762464 1429296 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 02:19:59.911895 1429296 main.go:141] libmachine: Making call to close driver server
	I0131 02:19:59.911942 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .Close
	I0131 02:19:59.912289 1429296 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:19:59.912317 1429296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:19:59.912331 1429296 main.go:141] libmachine: Making call to close driver server
	I0131 02:19:59.912343 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .Close
	I0131 02:19:59.912694 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Closing plugin on server side
	I0131 02:19:59.912716 1429296 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:19:59.912734 1429296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:19:59.922618 1429296 main.go:141] libmachine: Making call to close driver server
	I0131 02:19:59.922643 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .Close
	I0131 02:19:59.922926 1429296 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:19:59.922949 1429296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:20:00.032416 1429296 main.go:141] libmachine: Making call to close driver server
	I0131 02:20:00.032453 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .Close
	I0131 02:20:00.032812 1429296 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:20:00.032836 1429296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:20:00.032847 1429296 main.go:141] libmachine: Making call to close driver server
	I0131 02:20:00.032856 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) Calling .Close
	I0131 02:20:00.033318 1429296 kapi.go:59] client config for ingress-addon-legacy-757160: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:
[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:20:00.033602 1429296 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-757160" to be "Ready" ...
	I0131 02:20:00.033830 1429296 main.go:141] libmachine: (ingress-addon-legacy-757160) DBG | Closing plugin on server side
	I0131 02:20:00.033862 1429296 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:20:00.033886 1429296 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:20:00.036686 1429296 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0131 02:20:00.038031 1429296 addons.go:505] enable addons completed in 878.066106ms: enabled=[default-storageclass storage-provisioner]
	I0131 02:20:00.043290 1429296 node_ready.go:49] node "ingress-addon-legacy-757160" has status "Ready":"True"
	I0131 02:20:00.043313 1429296 node_ready.go:38] duration metric: took 9.68125ms waiting for node "ingress-addon-legacy-757160" to be "Ready" ...
	I0131 02:20:00.043323 1429296 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:20:00.063456 1429296 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-8rglf" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:02.071352 1429296 pod_ready.go:102] pod "coredns-66bff467f8-8rglf" in "kube-system" namespace has status "Ready":"False"
	I0131 02:20:04.570771 1429296 pod_ready.go:102] pod "coredns-66bff467f8-8rglf" in "kube-system" namespace has status "Ready":"False"
	I0131 02:20:07.071475 1429296 pod_ready.go:102] pod "coredns-66bff467f8-8rglf" in "kube-system" namespace has status "Ready":"False"
	I0131 02:20:09.571805 1429296 pod_ready.go:102] pod "coredns-66bff467f8-8rglf" in "kube-system" namespace has status "Ready":"False"
	I0131 02:20:12.070663 1429296 pod_ready.go:102] pod "coredns-66bff467f8-8rglf" in "kube-system" namespace has status "Ready":"False"
	I0131 02:20:12.596651 1429296 pod_ready.go:92] pod "coredns-66bff467f8-8rglf" in "kube-system" namespace has status "Ready":"True"
	I0131 02:20:12.596680 1429296 pod_ready.go:81] duration metric: took 12.533191046s waiting for pod "coredns-66bff467f8-8rglf" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.596694 1429296 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-757160" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.617570 1429296 pod_ready.go:92] pod "etcd-ingress-addon-legacy-757160" in "kube-system" namespace has status "Ready":"True"
	I0131 02:20:12.617612 1429296 pod_ready.go:81] duration metric: took 20.908228ms waiting for pod "etcd-ingress-addon-legacy-757160" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.617628 1429296 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-757160" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.625679 1429296 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-757160" in "kube-system" namespace has status "Ready":"True"
	I0131 02:20:12.625703 1429296 pod_ready.go:81] duration metric: took 8.065886ms waiting for pod "kube-apiserver-ingress-addon-legacy-757160" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.625715 1429296 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-757160" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.631537 1429296 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-757160" in "kube-system" namespace has status "Ready":"True"
	I0131 02:20:12.631557 1429296 pod_ready.go:81] duration metric: took 5.835833ms waiting for pod "kube-controller-manager-ingress-addon-legacy-757160" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.631566 1429296 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hlmjz" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.637056 1429296 pod_ready.go:92] pod "kube-proxy-hlmjz" in "kube-system" namespace has status "Ready":"True"
	I0131 02:20:12.637073 1429296 pod_ready.go:81] duration metric: took 5.501827ms waiting for pod "kube-proxy-hlmjz" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.637090 1429296 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-757160" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.764548 1429296 request.go:629] Waited for 127.343842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-757160
	I0131 02:20:12.964471 1429296 request.go:629] Waited for 196.431739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes/ingress-addon-legacy-757160
	I0131 02:20:12.968056 1429296 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-757160" in "kube-system" namespace has status "Ready":"True"
	I0131 02:20:12.968081 1429296 pod_ready.go:81] duration metric: took 330.984675ms waiting for pod "kube-scheduler-ingress-addon-legacy-757160" in "kube-system" namespace to be "Ready" ...
	I0131 02:20:12.968093 1429296 pod_ready.go:38] duration metric: took 12.924759563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:20:12.968111 1429296 api_server.go:52] waiting for apiserver process to appear ...
	I0131 02:20:12.968187 1429296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:20:12.980782 1429296 api_server.go:72] duration metric: took 13.315416329s to wait for apiserver process to appear ...
	I0131 02:20:12.980811 1429296 api_server.go:88] waiting for apiserver healthz status ...
	I0131 02:20:12.980836 1429296 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I0131 02:20:12.986542 1429296 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
	ok
	I0131 02:20:12.987406 1429296 api_server.go:141] control plane version: v1.18.20
	I0131 02:20:12.987430 1429296 api_server.go:131] duration metric: took 6.611792ms to wait for apiserver health ...
	I0131 02:20:12.987438 1429296 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 02:20:13.163790 1429296 request.go:629] Waited for 176.27254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I0131 02:20:13.169961 1429296 system_pods.go:59] 7 kube-system pods found
	I0131 02:20:13.169995 1429296 system_pods.go:61] "coredns-66bff467f8-8rglf" [b7d6a661-919f-42ca-a7bb-b2aa94a89777] Running
	I0131 02:20:13.170000 1429296 system_pods.go:61] "etcd-ingress-addon-legacy-757160" [7a16347b-8a88-4eb9-8841-fcc9b2407f0b] Running
	I0131 02:20:13.170004 1429296 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-757160" [c8a6ac10-9b79-4907-a129-fc466093cbe5] Running
	I0131 02:20:13.170009 1429296 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-757160" [f62d9547-6c18-430e-b435-d88dcce2eb31] Running
	I0131 02:20:13.170012 1429296 system_pods.go:61] "kube-proxy-hlmjz" [cae0f9db-f100-49bd-9eb3-66650a10b591] Running
	I0131 02:20:13.170016 1429296 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-757160" [9658acd0-9b06-4a74-96bc-dce992a84717] Running
	I0131 02:20:13.170020 1429296 system_pods.go:61] "storage-provisioner" [2cbbf3df-14bb-4374-8237-ab3caf7f419b] Running
	I0131 02:20:13.170037 1429296 system_pods.go:74] duration metric: took 182.592528ms to wait for pod list to return data ...
	I0131 02:20:13.170045 1429296 default_sa.go:34] waiting for default service account to be created ...
	I0131 02:20:13.364535 1429296 request.go:629] Waited for 194.376175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/default/serviceaccounts
	I0131 02:20:13.367580 1429296 default_sa.go:45] found service account: "default"
	I0131 02:20:13.367607 1429296 default_sa.go:55] duration metric: took 197.556935ms for default service account to be created ...
	I0131 02:20:13.367617 1429296 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 02:20:13.564759 1429296 request.go:629] Waited for 197.054621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/namespaces/kube-system/pods
	I0131 02:20:13.571368 1429296 system_pods.go:86] 7 kube-system pods found
	I0131 02:20:13.571404 1429296 system_pods.go:89] "coredns-66bff467f8-8rglf" [b7d6a661-919f-42ca-a7bb-b2aa94a89777] Running
	I0131 02:20:13.571413 1429296 system_pods.go:89] "etcd-ingress-addon-legacy-757160" [7a16347b-8a88-4eb9-8841-fcc9b2407f0b] Running
	I0131 02:20:13.571420 1429296 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-757160" [c8a6ac10-9b79-4907-a129-fc466093cbe5] Running
	I0131 02:20:13.571426 1429296 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-757160" [f62d9547-6c18-430e-b435-d88dcce2eb31] Running
	I0131 02:20:13.571430 1429296 system_pods.go:89] "kube-proxy-hlmjz" [cae0f9db-f100-49bd-9eb3-66650a10b591] Running
	I0131 02:20:13.571434 1429296 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-757160" [9658acd0-9b06-4a74-96bc-dce992a84717] Running
	I0131 02:20:13.571440 1429296 system_pods.go:89] "storage-provisioner" [2cbbf3df-14bb-4374-8237-ab3caf7f419b] Running
	I0131 02:20:13.571449 1429296 system_pods.go:126] duration metric: took 203.825644ms to wait for k8s-apps to be running ...
	I0131 02:20:13.571466 1429296 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 02:20:13.571530 1429296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:20:13.586309 1429296 system_svc.go:56] duration metric: took 14.829158ms WaitForService to wait for kubelet.
	I0131 02:20:13.586342 1429296 kubeadm.go:581] duration metric: took 13.920993094s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 02:20:13.586364 1429296 node_conditions.go:102] verifying NodePressure condition ...
	I0131 02:20:13.763780 1429296 request.go:629] Waited for 177.299945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.40:8443/api/v1/nodes
	I0131 02:20:13.767205 1429296 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:20:13.767247 1429296 node_conditions.go:123] node cpu capacity is 2
	I0131 02:20:13.767265 1429296 node_conditions.go:105] duration metric: took 180.891528ms to run NodePressure ...
	I0131 02:20:13.767280 1429296 start.go:228] waiting for startup goroutines ...
	I0131 02:20:13.767290 1429296 start.go:233] waiting for cluster config update ...
	I0131 02:20:13.767316 1429296 start.go:242] writing updated cluster config ...
	I0131 02:20:13.767742 1429296 ssh_runner.go:195] Run: rm -f paused
	I0131 02:20:13.817989 1429296 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0131 02:20:13.820030 1429296 out.go:177] 
	W0131 02:20:13.821528 1429296 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0131 02:20:13.823092 1429296 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0131 02:20:13.824624 1429296 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-757160" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 02:19:07 UTC, ends at Wed 2024-01-31 02:23:20 UTC. --
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.783130551Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706667800783118182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203655,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=e5d447d4-3055-4dc4-aafd-648d44992eec name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.783697501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eb57b102-da1a-4900-8309-34fce0668fb3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.783770348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eb57b102-da1a-4900-8309-34fce0668fb3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.784031866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c36be79cbbe49bf7a85951669f7dd5a1bd4878a215abd3258b9a3a8b586fd713,PodSandboxId:70d525ac45b0cf53e52098c1ce8597596e7f168785f2b9587ebff8c6427e58d7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706667786096842348,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-wxckq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fed9da6c-de56-4991-b0bb-8d3b98e0d23b,},Annotations:map[string]string{io.kubernetes.container.hash: b5bd7829,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a8d92e5fab816426938191bd58002bcb2c7fe1fc02e2cc9e982730994c1955,PodSandboxId:28aa973a71c59a5855b4499c372b39192707f068cd3a357645a0c5e90fad03e1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,State:CONTAINER_RUNNING,CreatedAt:1706667644673270615,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77f68f24-e339-478e-9821-f76c8e02fee9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 854e17c5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90b17e441fbbe05938ec1eaaa4302c69b9adad560fdf20588606b47db219fe4,PodSandboxId:92405e1a97388b2f05b5f1429d2a1ae593d2dea631c24ab3af73673fc4a848b6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706667629375970847,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-p2pd2,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff6c613b-6e12-4447-b356-4af1483f3a82,},Annotations:map[string]string{io.kubernetes.container.hash: 9092021,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b4f82ceb990862857fa83b0e422df6fbd5bec9b34d3e9ed865f046a4af12648e,PodSandboxId:b1a55b49d888e6b8040ed0d0703cf30f6ea5dd86dd992aeff618c18f08e6099b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706667619950844633,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qz5pg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9a634121-6ce5-4cca-ae1e-4b9b8adfd7f5,},Annotations:map[string]string{io.kubernetes.container.hash: b89cb740,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c72876aec15fb80f6445c9b9604ee8be22b7a8a52bd7770fa4eba6f44a3127d,PodSandboxId:ad5df6a5c0405de4798d546b5d3f0e405bbbd17e4701addd9fe640ffd0e1b0b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706667618898075691,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x56rr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0839cdc-b043-49bf-864b-ff93460de1ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbc5920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98a422bd73de4270fef97f1bf41a440a0b94c53d44ff69c8ad6be9b09b31dc3,PodSandboxId:6a7cc9f39b54757052f7c058a1a5212279e318b2cf25a818aa89039a1ce947c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706667601776324441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7d6a661-919f-42ca-a7bb-b2aa94a89777,},Annotations:map[string]string{io.kubernetes.container.hash: f77f40ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79542be60c1f1354e9d28cf6c21d
fbbbe92c8cfd5ba2018b1303ae4c27c73739,PodSandboxId:87ad27a7c2cb9d22fcc425936b88e645c8777dabc7a413f00206ec2b057c8894,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706667600692613274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cbbf3df-14bb-4374-8237-ab3caf7f419b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f4f192f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a1ce4d8b6f59ed40341a5e230e
f0ceb3c0b90f6b89c9656434cad3be7004b,PodSandboxId:a35df1e398ebeb837423103cad2307aa15a5b77a205e6e9704b2a52b9053f82b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706667599886294314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae0f9db-f100-49bd-9eb3-66650a10b591,},Annotations:map[string]string{io.kubernetes.container.hash: a70194e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b46fcf9e5990b3d30df6c19f6053acb3f2eb2e8eb067e488ad384a17daba985f,PodS
andboxId:de1eeee50f8cecf2890457b5f4aaf07f88569a8d553ef3353328fd3da574eb20,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706667575097341822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b20d002df019cea327c0a381617edab,},Annotations:map[string]string{io.kubernetes.container.hash: 358a69ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caeeb141c9cf6ccbd03bc5d538a7de25e3634470ba63a4ae660eb72d33f6d0c,PodSandboxId:50153866082cbed297474d2872270652200e1
5d70f070deb338fe74ae31082e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706667574295347788,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0319c1c5cace348fe59e8e57cfa53027c02a884ef70eaa172a516762cc3d96,PodSandboxId:ecd5f32532bbe7526d1eb76d87e99c6c9765dc7cdc2
5847cd8a0d1faf36ddb2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706667573847965942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9303d3af71ec0bf560f5ba5776e28c067d9ed3cff250c33f689d1762ad78cd,PodSandboxId:3280267e21ae2
0ec274d8fc2c7113f90ad8fd60e7c267f8e71d7de411a097113,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706667573867583067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30fd9583b533f9ea37d271dca6c5bf16,},Annotations:map[string]string{io.kubernetes.container.hash: 68f66fbb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eb57b102-da1a-4900-8309-34fce0668fb3 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.821180641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c8a208ac-f194-4503-9141-2831a754b076 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.821266782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c8a208ac-f194-4503-9141-2831a754b076 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.822412722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=93af432a-448b-4edd-9ecb-50a3253f9b24 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.822914100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706667800822901866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203655,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=93af432a-448b-4edd-9ecb-50a3253f9b24 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.823637143Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ed513c2e-72e6-44f9-b038-5596c5a2c738 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.823707487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ed513c2e-72e6-44f9-b038-5596c5a2c738 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.823983301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c36be79cbbe49bf7a85951669f7dd5a1bd4878a215abd3258b9a3a8b586fd713,PodSandboxId:70d525ac45b0cf53e52098c1ce8597596e7f168785f2b9587ebff8c6427e58d7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706667786096842348,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-wxckq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fed9da6c-de56-4991-b0bb-8d3b98e0d23b,},Annotations:map[string]string{io.kubernetes.container.hash: b5bd7829,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a8d92e5fab816426938191bd58002bcb2c7fe1fc02e2cc9e982730994c1955,PodSandboxId:28aa973a71c59a5855b4499c372b39192707f068cd3a357645a0c5e90fad03e1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,State:CONTAINER_RUNNING,CreatedAt:1706667644673270615,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77f68f24-e339-478e-9821-f76c8e02fee9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 854e17c5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90b17e441fbbe05938ec1eaaa4302c69b9adad560fdf20588606b47db219fe4,PodSandboxId:92405e1a97388b2f05b5f1429d2a1ae593d2dea631c24ab3af73673fc4a848b6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706667629375970847,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-p2pd2,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff6c613b-6e12-4447-b356-4af1483f3a82,},Annotations:map[string]string{io.kubernetes.container.hash: 9092021,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b4f82ceb990862857fa83b0e422df6fbd5bec9b34d3e9ed865f046a4af12648e,PodSandboxId:b1a55b49d888e6b8040ed0d0703cf30f6ea5dd86dd992aeff618c18f08e6099b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706667619950844633,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qz5pg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9a634121-6ce5-4cca-ae1e-4b9b8adfd7f5,},Annotations:map[string]string{io.kubernetes.container.hash: b89cb740,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c72876aec15fb80f6445c9b9604ee8be22b7a8a52bd7770fa4eba6f44a3127d,PodSandboxId:ad5df6a5c0405de4798d546b5d3f0e405bbbd17e4701addd9fe640ffd0e1b0b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706667618898075691,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x56rr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0839cdc-b043-49bf-864b-ff93460de1ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbc5920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98a422bd73de4270fef97f1bf41a440a0b94c53d44ff69c8ad6be9b09b31dc3,PodSandboxId:6a7cc9f39b54757052f7c058a1a5212279e318b2cf25a818aa89039a1ce947c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706667601776324441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7d6a661-919f-42ca-a7bb-b2aa94a89777,},Annotations:map[string]string{io.kubernetes.container.hash: f77f40ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79542be60c1f1354e9d28cf6c21d
fbbbe92c8cfd5ba2018b1303ae4c27c73739,PodSandboxId:87ad27a7c2cb9d22fcc425936b88e645c8777dabc7a413f00206ec2b057c8894,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706667600692613274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cbbf3df-14bb-4374-8237-ab3caf7f419b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f4f192f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a1ce4d8b6f59ed40341a5e230e
f0ceb3c0b90f6b89c9656434cad3be7004b,PodSandboxId:a35df1e398ebeb837423103cad2307aa15a5b77a205e6e9704b2a52b9053f82b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706667599886294314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae0f9db-f100-49bd-9eb3-66650a10b591,},Annotations:map[string]string{io.kubernetes.container.hash: a70194e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b46fcf9e5990b3d30df6c19f6053acb3f2eb2e8eb067e488ad384a17daba985f,PodS
andboxId:de1eeee50f8cecf2890457b5f4aaf07f88569a8d553ef3353328fd3da574eb20,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706667575097341822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b20d002df019cea327c0a381617edab,},Annotations:map[string]string{io.kubernetes.container.hash: 358a69ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caeeb141c9cf6ccbd03bc5d538a7de25e3634470ba63a4ae660eb72d33f6d0c,PodSandboxId:50153866082cbed297474d2872270652200e1
5d70f070deb338fe74ae31082e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706667574295347788,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0319c1c5cace348fe59e8e57cfa53027c02a884ef70eaa172a516762cc3d96,PodSandboxId:ecd5f32532bbe7526d1eb76d87e99c6c9765dc7cdc2
5847cd8a0d1faf36ddb2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706667573847965942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9303d3af71ec0bf560f5ba5776e28c067d9ed3cff250c33f689d1762ad78cd,PodSandboxId:3280267e21ae2
0ec274d8fc2c7113f90ad8fd60e7c267f8e71d7de411a097113,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706667573867583067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30fd9583b533f9ea37d271dca6c5bf16,},Annotations:map[string]string{io.kubernetes.container.hash: 68f66fbb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ed513c2e-72e6-44f9-b038-5596c5a2c738 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.860948406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9ab5a31b-bfa4-4131-9ab4-0a37b3698e5e name=/runtime.v1.RuntimeService/Version
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.861030788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9ab5a31b-bfa4-4131-9ab4-0a37b3698e5e name=/runtime.v1.RuntimeService/Version
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.865010669Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8933f0ad-dffd-421d-a4de-d9a25f453f3d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.865593492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706667800865574386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203655,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=8933f0ad-dffd-421d-a4de-d9a25f453f3d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.866181993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b4a446da-ea63-4562-b3dd-2b8163dbcfef name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.866244323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b4a446da-ea63-4562-b3dd-2b8163dbcfef name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.866562331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c36be79cbbe49bf7a85951669f7dd5a1bd4878a215abd3258b9a3a8b586fd713,PodSandboxId:70d525ac45b0cf53e52098c1ce8597596e7f168785f2b9587ebff8c6427e58d7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706667786096842348,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-wxckq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fed9da6c-de56-4991-b0bb-8d3b98e0d23b,},Annotations:map[string]string{io.kubernetes.container.hash: b5bd7829,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a8d92e5fab816426938191bd58002bcb2c7fe1fc02e2cc9e982730994c1955,PodSandboxId:28aa973a71c59a5855b4499c372b39192707f068cd3a357645a0c5e90fad03e1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,State:CONTAINER_RUNNING,CreatedAt:1706667644673270615,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77f68f24-e339-478e-9821-f76c8e02fee9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 854e17c5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90b17e441fbbe05938ec1eaaa4302c69b9adad560fdf20588606b47db219fe4,PodSandboxId:92405e1a97388b2f05b5f1429d2a1ae593d2dea631c24ab3af73673fc4a848b6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706667629375970847,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-p2pd2,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff6c613b-6e12-4447-b356-4af1483f3a82,},Annotations:map[string]string{io.kubernetes.container.hash: 9092021,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b4f82ceb990862857fa83b0e422df6fbd5bec9b34d3e9ed865f046a4af12648e,PodSandboxId:b1a55b49d888e6b8040ed0d0703cf30f6ea5dd86dd992aeff618c18f08e6099b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706667619950844633,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qz5pg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9a634121-6ce5-4cca-ae1e-4b9b8adfd7f5,},Annotations:map[string]string{io.kubernetes.container.hash: b89cb740,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c72876aec15fb80f6445c9b9604ee8be22b7a8a52bd7770fa4eba6f44a3127d,PodSandboxId:ad5df6a5c0405de4798d546b5d3f0e405bbbd17e4701addd9fe640ffd0e1b0b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706667618898075691,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x56rr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0839cdc-b043-49bf-864b-ff93460de1ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbc5920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98a422bd73de4270fef97f1bf41a440a0b94c53d44ff69c8ad6be9b09b31dc3,PodSandboxId:6a7cc9f39b54757052f7c058a1a5212279e318b2cf25a818aa89039a1ce947c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706667601776324441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7d6a661-919f-42ca-a7bb-b2aa94a89777,},Annotations:map[string]string{io.kubernetes.container.hash: f77f40ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79542be60c1f1354e9d28cf6c21d
fbbbe92c8cfd5ba2018b1303ae4c27c73739,PodSandboxId:87ad27a7c2cb9d22fcc425936b88e645c8777dabc7a413f00206ec2b057c8894,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706667600692613274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cbbf3df-14bb-4374-8237-ab3caf7f419b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f4f192f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a1ce4d8b6f59ed40341a5e230e
f0ceb3c0b90f6b89c9656434cad3be7004b,PodSandboxId:a35df1e398ebeb837423103cad2307aa15a5b77a205e6e9704b2a52b9053f82b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706667599886294314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae0f9db-f100-49bd-9eb3-66650a10b591,},Annotations:map[string]string{io.kubernetes.container.hash: a70194e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b46fcf9e5990b3d30df6c19f6053acb3f2eb2e8eb067e488ad384a17daba985f,PodS
andboxId:de1eeee50f8cecf2890457b5f4aaf07f88569a8d553ef3353328fd3da574eb20,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706667575097341822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b20d002df019cea327c0a381617edab,},Annotations:map[string]string{io.kubernetes.container.hash: 358a69ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caeeb141c9cf6ccbd03bc5d538a7de25e3634470ba63a4ae660eb72d33f6d0c,PodSandboxId:50153866082cbed297474d2872270652200e1
5d70f070deb338fe74ae31082e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706667574295347788,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0319c1c5cace348fe59e8e57cfa53027c02a884ef70eaa172a516762cc3d96,PodSandboxId:ecd5f32532bbe7526d1eb76d87e99c6c9765dc7cdc2
5847cd8a0d1faf36ddb2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706667573847965942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9303d3af71ec0bf560f5ba5776e28c067d9ed3cff250c33f689d1762ad78cd,PodSandboxId:3280267e21ae2
0ec274d8fc2c7113f90ad8fd60e7c267f8e71d7de411a097113,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706667573867583067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30fd9583b533f9ea37d271dca6c5bf16,},Annotations:map[string]string{io.kubernetes.container.hash: 68f66fbb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b4a446da-ea63-4562-b3dd-2b8163dbcfef name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.902204762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c9205841-3d05-4fba-a95d-4bcadc464e51 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.902289323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c9205841-3d05-4fba-a95d-4bcadc464e51 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.903537492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9ecf979e-d92c-4589-9702-69aa997fba8d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.904041584Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706667800904027025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203655,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=9ecf979e-d92c-4589-9702-69aa997fba8d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.904673613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8143f134-d405-47a6-8772-41b72b473e86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.904727411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8143f134-d405-47a6-8772-41b72b473e86 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:23:20 ingress-addon-legacy-757160 crio[723]: time="2024-01-31 02:23:20.904964279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c36be79cbbe49bf7a85951669f7dd5a1bd4878a215abd3258b9a3a8b586fd713,PodSandboxId:70d525ac45b0cf53e52098c1ce8597596e7f168785f2b9587ebff8c6427e58d7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706667786096842348,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-wxckq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fed9da6c-de56-4991-b0bb-8d3b98e0d23b,},Annotations:map[string]string{io.kubernetes.container.hash: b5bd7829,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87a8d92e5fab816426938191bd58002bcb2c7fe1fc02e2cc9e982730994c1955,PodSandboxId:28aa973a71c59a5855b4499c372b39192707f068cd3a357645a0c5e90fad03e1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da,State:CONTAINER_RUNNING,CreatedAt:1706667644673270615,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 77f68f24-e339-478e-9821-f76c8e02fee9,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 854e17c5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a90b17e441fbbe05938ec1eaaa4302c69b9adad560fdf20588606b47db219fe4,PodSandboxId:92405e1a97388b2f05b5f1429d2a1ae593d2dea631c24ab3af73673fc4a848b6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706667629375970847,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-p2pd2,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ff6c613b-6e12-4447-b356-4af1483f3a82,},Annotations:map[string]string{io.kubernetes.container.hash: 9092021,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b4f82ceb990862857fa83b0e422df6fbd5bec9b34d3e9ed865f046a4af12648e,PodSandboxId:b1a55b49d888e6b8040ed0d0703cf30f6ea5dd86dd992aeff618c18f08e6099b,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706667619950844633,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qz5pg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9a634121-6ce5-4cca-ae1e-4b9b8adfd7f5,},Annotations:map[string]string{io.kubernetes.container.hash: b89cb740,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c72876aec15fb80f6445c9b9604ee8be22b7a8a52bd7770fa4eba6f44a3127d,PodSandboxId:ad5df6a5c0405de4798d546b5d3f0e405bbbd17e4701addd9fe640ffd0e1b0b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706667618898075691,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x56rr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0839cdc-b043-49bf-864b-ff93460de1ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbc5920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f98a422bd73de4270fef97f1bf41a440a0b94c53d44ff69c8ad6be9b09b31dc3,PodSandboxId:6a7cc9f39b54757052f7c058a1a5212279e318b2cf25a818aa89039a1ce947c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706667601776324441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-8rglf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7d6a661-919f-42ca-a7bb-b2aa94a89777,},Annotations:map[string]string{io.kubernetes.container.hash: f77f40ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79542be60c1f1354e9d28cf6c21d
fbbbe92c8cfd5ba2018b1303ae4c27c73739,PodSandboxId:87ad27a7c2cb9d22fcc425936b88e645c8777dabc7a413f00206ec2b057c8894,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706667600692613274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cbbf3df-14bb-4374-8237-ab3caf7f419b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f4f192f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872a1ce4d8b6f59ed40341a5e230e
f0ceb3c0b90f6b89c9656434cad3be7004b,PodSandboxId:a35df1e398ebeb837423103cad2307aa15a5b77a205e6e9704b2a52b9053f82b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706667599886294314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlmjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cae0f9db-f100-49bd-9eb3-66650a10b591,},Annotations:map[string]string{io.kubernetes.container.hash: a70194e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b46fcf9e5990b3d30df6c19f6053acb3f2eb2e8eb067e488ad384a17daba985f,PodS
andboxId:de1eeee50f8cecf2890457b5f4aaf07f88569a8d553ef3353328fd3da574eb20,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706667575097341822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b20d002df019cea327c0a381617edab,},Annotations:map[string]string{io.kubernetes.container.hash: 358a69ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6caeeb141c9cf6ccbd03bc5d538a7de25e3634470ba63a4ae660eb72d33f6d0c,PodSandboxId:50153866082cbed297474d2872270652200e1
5d70f070deb338fe74ae31082e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706667574295347788,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0319c1c5cace348fe59e8e57cfa53027c02a884ef70eaa172a516762cc3d96,PodSandboxId:ecd5f32532bbe7526d1eb76d87e99c6c9765dc7cdc2
5847cd8a0d1faf36ddb2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706667573847965942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9303d3af71ec0bf560f5ba5776e28c067d9ed3cff250c33f689d1762ad78cd,PodSandboxId:3280267e21ae2
0ec274d8fc2c7113f90ad8fd60e7c267f8e71d7de411a097113,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706667573867583067,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-757160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30fd9583b533f9ea37d271dca6c5bf16,},Annotations:map[string]string{io.kubernetes.container.hash: 68f66fbb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8143f134-d405-47a6-8772-41b72b473e86 name=/runtime.v1.RuntimeServ
ice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c36be79cbbe49       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            14 seconds ago      Running             hello-world-app           0                   70d525ac45b0c       hello-world-app-5f5d8b66bb-wxckq
	87a8d92e5fab8       docker.io/library/nginx@sha256:156d75f07c59b2fd59d3d1470631777943bb574135214f0a90c7bb82bde916da                    2 minutes ago       Running             nginx                     0                   28aa973a71c59       nginx
	a90b17e441fbb       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   92405e1a97388       ingress-nginx-controller-7fcf777cb7-p2pd2
	b4f82ceb99086       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   b1a55b49d888e       ingress-nginx-admission-patch-qz5pg
	6c72876aec15f       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   ad5df6a5c0405       ingress-nginx-admission-create-x56rr
	f98a422bd73de       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   6a7cc9f39b547       coredns-66bff467f8-8rglf
	79542be60c1f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   87ad27a7c2cb9       storage-provisioner
	872a1ce4d8b6f       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   a35df1e398ebe       kube-proxy-hlmjz
	b46fcf9e5990b       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   de1eeee50f8ce       etcd-ingress-addon-legacy-757160
	6caeeb141c9cf       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   50153866082cb       kube-scheduler-ingress-addon-legacy-757160
	2a9303d3af71e       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   3280267e21ae2       kube-apiserver-ingress-addon-legacy-757160
	3e0319c1c5cac       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   ecd5f32532bbe       kube-controller-manager-ingress-addon-legacy-757160
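The table above shows the ingress-nginx controller and its admission create/patch containers as Exited while the hello-world-app and nginx pods are Running. As a triage step (not part of the captured output), the exited controller's last output can be pulled with crictl on the node; a minimal sketch, using the minikube binary from this run and the container ID prefix from the table above:

	out/minikube-linux-amd64 -p ingress-addon-legacy-757160 ssh "sudo crictl ps -a"            # confirm the exited controller container
	out/minikube-linux-amd64 -p ingress-addon-legacy-757160 ssh "sudo crictl logs a90b17e441fbb"  # last output of the ingress-nginx controller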
	
	
	==> coredns [f98a422bd73de4270fef97f1bf41a440a0b94c53d44ff69c8ad6be9b09b31dc3] <==
	[INFO] 10.244.0.5:54587 - 13652 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00009493s
	[INFO] 10.244.0.5:33993 - 11150 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069806s
	[INFO] 10.244.0.5:54587 - 53326 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077016s
	[INFO] 10.244.0.5:33993 - 45837 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073283s
	[INFO] 10.244.0.5:33993 - 47631 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057428s
	[INFO] 10.244.0.5:54587 - 37870 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000088244s
	[INFO] 10.244.0.5:33993 - 27659 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029209s
	[INFO] 10.244.0.5:54587 - 36398 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064821s
	[INFO] 10.244.0.5:33993 - 3820 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000097078s
	[INFO] 10.244.0.5:54587 - 62195 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000126359s
	[INFO] 10.244.0.5:33993 - 52995 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049794s
	[INFO] 10.244.0.5:58856 - 32784 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082834s
	[INFO] 10.244.0.5:56708 - 52126 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000039773s
	[INFO] 10.244.0.5:58856 - 19827 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003937s
	[INFO] 10.244.0.5:56708 - 27164 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033857s
	[INFO] 10.244.0.5:56708 - 14842 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000114823s
	[INFO] 10.244.0.5:58856 - 51361 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031623s
	[INFO] 10.244.0.5:56708 - 15770 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031645s
	[INFO] 10.244.0.5:58856 - 50614 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023618s
	[INFO] 10.244.0.5:58856 - 55599 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024492s
	[INFO] 10.244.0.5:56708 - 27665 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000019844s
	[INFO] 10.244.0.5:58856 - 35318 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030848s
	[INFO] 10.244.0.5:56708 - 62199 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031919s
	[INFO] 10.244.0.5:56708 - 60937 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000031025s
	[INFO] 10.244.0.5:58856 - 14451 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000021777s
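The NXDOMAIN/NOERROR pairs above are ordinary search-domain expansion rather than lookup failures: the client pod (in the ingress-nginx namespace, judging by the search suffixes) tries each resolv.conf search suffix before the fully qualified name resolves with NOERROR, because the name has fewer dots than ndots. A minimal sketch of checking this from the controller pod, assuming a placeholder pod name and the usual cluster DNS service IP (neither is captured in this log):

	kubectl -n ingress-nginx exec <ingress-nginx-controller-pod> -- cat /etc/resolv.conf
	# expected shape (nameserver IP assumed):
	# nameserver 10.96.0.10
	# search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
	# options ndots:5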
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-757160
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-757160
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=ingress-addon-legacy-757160
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T02_19_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 02:19:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-757160
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 02:23:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 02:23:12 +0000   Wed, 31 Jan 2024 02:19:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 02:23:12 +0000   Wed, 31 Jan 2024 02:19:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 02:23:12 +0000   Wed, 31 Jan 2024 02:19:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 02:23:12 +0000   Wed, 31 Jan 2024 02:19:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ingress-addon-legacy-757160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 1b36cdf9fa5c4aea94b698025ff59423
	  System UUID:                1b36cdf9-fa5c-4aea-94b6-98025ff59423
	  Boot ID:                    8f3773fb-8bb8-4def-9ebc-25224d6bf76d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-wxckq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 coredns-66bff467f8-8rglf                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m23s
	  kube-system                 etcd-ingress-addon-legacy-757160                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-apiserver-ingress-addon-legacy-757160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-757160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-proxy-hlmjz                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 kube-scheduler-ingress-addon-legacy-757160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 3m49s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x4 over 3m49s)  kubelet     Node ingress-addon-legacy-757160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x4 over 3m49s)  kubelet     Node ingress-addon-legacy-757160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x3 over 3m49s)  kubelet     Node ingress-addon-legacy-757160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s                  kubelet     Node ingress-addon-legacy-757160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s                  kubelet     Node ingress-addon-legacy-757160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s                  kubelet     Node ingress-addon-legacy-757160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m29s                  kubelet     Node ingress-addon-legacy-757160 status is now: NodeReady
	  Normal  Starting                 3m21s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan31 02:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.085308] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan31 02:19] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.810946] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.132131] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.986554] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.649858] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.104881] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.151734] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.110943] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.214272] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[  +7.814144] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +3.451929] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.357908] systemd-fstab-generator[1429]: Ignoring "noauto" for root device
	[ +18.338605] kauditd_printk_skb: 6 callbacks suppressed
	[Jan31 02:20] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.040869] kauditd_printk_skb: 6 callbacks suppressed
	[ +19.588189] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.848078] kauditd_printk_skb: 3 callbacks suppressed
	[Jan31 02:23] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [b46fcf9e5990b3d30df6c19f6053acb3f2eb2e8eb067e488ad384a17daba985f] <==
	raft2024/01/31 02:19:35 INFO: 1088a855a4aa8d0a switched to configuration voters=(1191387187227823370)
	2024-01-31 02:19:35.242417 W | auth: simple token is not cryptographically signed
	2024-01-31 02:19:35.246894 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-31 02:19:35.250508 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-31 02:19:35.250839 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-31 02:19:35.250923 I | embed: listening for peers on 192.168.39.40:2380
	2024-01-31 02:19:35.250977 I | etcdserver: 1088a855a4aa8d0a as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/31 02:19:35 INFO: 1088a855a4aa8d0a switched to configuration voters=(1191387187227823370)
	2024-01-31 02:19:35.251517 I | etcdserver/membership: added member 1088a855a4aa8d0a [https://192.168.39.40:2380] to cluster ca485a4cd00ef8c5
	raft2024/01/31 02:19:35 INFO: 1088a855a4aa8d0a is starting a new election at term 1
	raft2024/01/31 02:19:35 INFO: 1088a855a4aa8d0a became candidate at term 2
	raft2024/01/31 02:19:35 INFO: 1088a855a4aa8d0a received MsgVoteResp from 1088a855a4aa8d0a at term 2
	raft2024/01/31 02:19:35 INFO: 1088a855a4aa8d0a became leader at term 2
	raft2024/01/31 02:19:35 INFO: raft.node: 1088a855a4aa8d0a elected leader 1088a855a4aa8d0a at term 2
	2024-01-31 02:19:35.731953 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-31 02:19:35.733333 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-31 02:19:35.733410 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-31 02:19:35.733492 I | etcdserver: published {Name:ingress-addon-legacy-757160 ClientURLs:[https://192.168.39.40:2379]} to cluster ca485a4cd00ef8c5
	2024-01-31 02:19:35.733596 I | embed: ready to serve client requests
	2024-01-31 02:19:35.733624 I | embed: ready to serve client requests
	2024-01-31 02:19:35.734884 I | embed: serving client requests on 192.168.39.40:2379
	2024-01-31 02:19:35.734985 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-31 02:19:58.288205 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replication-controller\" " with result "range_response_count:1 size:212" took too long (476.577029ms) to execute
	2024-01-31 02:19:58.288387 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (174.817154ms) to execute
	2024-01-31 02:20:35.564763 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (246.517447ms) to execute
	
	
	==> kernel <==
	 02:23:21 up 4 min,  0 users,  load average: 0.36, 0.47, 0.22
	Linux ingress-addon-legacy-757160 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [2a9303d3af71ec0bf560f5ba5776e28c067d9ed3cff250c33f689d1762ad78cd] <==
	E0131 02:19:38.688614       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.40, ResourceVersion: 0, AdditionalErrorMsg: 
	I0131 02:19:38.727581       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0131 02:19:38.729636       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0131 02:19:38.730653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0131 02:19:38.730701       1 cache.go:39] Caches are synced for autoregister controller
	I0131 02:19:38.736891       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0131 02:19:39.625552       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0131 02:19:39.625637       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0131 02:19:39.632747       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0131 02:19:39.646522       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0131 02:19:39.646557       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0131 02:19:40.159893       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0131 02:19:40.204317       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0131 02:19:40.370358       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.40]
	I0131 02:19:40.371305       1 controller.go:609] quota admission added evaluator for: endpoints
	I0131 02:19:40.377641       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0131 02:19:40.972337       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0131 02:19:42.001298       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0131 02:19:42.100065       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0131 02:19:42.524904       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0131 02:19:58.412259       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0131 02:19:58.654022       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0131 02:20:14.614690       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0131 02:20:39.390012       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0131 02:23:13.409075       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [3e0319c1c5cace348fe59e8e57cfa53027c02a884ef70eaa172a516762cc3d96] <==
	I0131 02:19:58.660394       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"2dbece5b-ea27-4e60-8321-760ce9bcd855", APIVersion:"apps/v1", ResourceVersion:"226", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-hlmjz
	I0131 02:19:58.701811       1 shared_informer.go:230] Caches are synced for taint 
	I0131 02:19:58.702063       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0131 02:19:58.702244       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-757160. Assuming now as a timestamp.
	I0131 02:19:58.702477       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0131 02:19:58.702522       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0131 02:19:58.703504       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-757160", UID:"292f647a-dcfc-4bf3-9469-6e3bc799b020", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-757160 event: Registered Node ingress-addon-legacy-757160 in Controller
	I0131 02:19:58.852930       1 shared_informer.go:230] Caches are synced for resource quota 
	I0131 02:19:58.853058       1 shared_informer.go:230] Caches are synced for attach detach 
	I0131 02:19:58.907717       1 shared_informer.go:230] Caches are synced for service account 
	I0131 02:19:58.924903       1 shared_informer.go:230] Caches are synced for namespace 
	I0131 02:19:58.928212       1 shared_informer.go:230] Caches are synced for resource quota 
	I0131 02:19:58.997652       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0131 02:19:58.997688       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0131 02:19:59.004495       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0131 02:19:59.188126       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e2b6f71c-62e1-4c9c-b9bc-07343e2a5107", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0131 02:19:59.231920       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"2cfb66fa-90a3-4ac3-bbb0-2ea241adace3", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-f5xtf
	I0131 02:20:14.609102       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"549c8e11-9d8d-4f82-b9f6-8987e09550bc", APIVersion:"apps/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0131 02:20:14.632891       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"32dd9f8d-bfea-47a4-b544-bb740b44c1e6", APIVersion:"apps/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-p2pd2
	I0131 02:20:14.667032       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1f166f2c-6b91-4182-b7d8-08810943c785", APIVersion:"batch/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-x56rr
	I0131 02:20:14.735696       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d9a5e642-9bcd-48f1-9833-07524e4d459d", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-qz5pg
	I0131 02:20:19.617536       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1f166f2c-6b91-4182-b7d8-08810943c785", APIVersion:"batch/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0131 02:20:20.614605       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d9a5e642-9bcd-48f1-9833-07524e4d459d", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0131 02:23:02.528039       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"ad7dd45d-8f7b-4f7c-8809-68de93422c03", APIVersion:"apps/v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0131 02:23:02.538012       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"f1bb08e8-dd24-46a0-96d7-f9e2a18f7366", APIVersion:"apps/v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-wxckq
	
	
	==> kube-proxy [872a1ce4d8b6f59ed40341a5e230ef0ceb3c0b90f6b89c9656434cad3be7004b] <==
	W0131 02:20:00.152011       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0131 02:20:00.160353       1 node.go:136] Successfully retrieved node IP: 192.168.39.40
	I0131 02:20:00.160402       1 server_others.go:186] Using iptables Proxier.
	I0131 02:20:00.160649       1 server.go:583] Version: v1.18.20
	I0131 02:20:00.163158       1 config.go:315] Starting service config controller
	I0131 02:20:00.163214       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0131 02:20:00.163324       1 config.go:133] Starting endpoints config controller
	I0131 02:20:00.163354       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0131 02:20:00.263498       1 shared_informer.go:230] Caches are synced for service config 
	I0131 02:20:00.263855       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [6caeeb141c9cf6ccbd03bc5d538a7de25e3634470ba63a4ae660eb72d33f6d0c] <==
	I0131 02:19:38.729184       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0131 02:19:38.732603       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0131 02:19:38.732664       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0131 02:19:38.734533       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 02:19:38.759642       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 02:19:38.759837       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 02:19:38.760084       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0131 02:19:38.760217       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 02:19:38.760402       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 02:19:38.760534       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 02:19:38.760602       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 02:19:38.760711       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 02:19:38.760790       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 02:19:38.765994       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 02:19:38.767061       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 02:19:39.582632       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 02:19:39.585728       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 02:19:39.711546       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 02:19:39.897977       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 02:19:39.927399       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 02:19:39.975392       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 02:19:39.999274       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0131 02:19:42.529504       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0131 02:19:58.448687       1 factory.go:503] pod: kube-system/coredns-66bff467f8-f5xtf is already present in the active queue
	E0131 02:19:58.465744       1 factory.go:503] pod: kube-system/coredns-66bff467f8-8rglf is already present in the active queue
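The "forbidden" list errors between 02:19:38 and 02:19:40 appear to be the scheduler listing resources before the RBAC bootstrap roles exist; they stop once the informer cache syncs at 02:19:42, so they read as startup noise rather than part of this failure. If such errors persisted, the scheduler's permissions could be probed directly; a minimal sketch (impersonation assumes the checking user has sufficient rights):

	kubectl auth can-i list pods --as=system:kube-scheduler
	kubectl auth can-i get configmaps -n kube-system --as=system:kube-scheduler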
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 02:19:07 UTC, ends at Wed 2024-01-31 02:23:21 UTC. --
	Jan 31 02:20:21 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:20:21.676912    1436 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a634121-6ce5-4cca-ae1e-4b9b8adfd7f5-ingress-nginx-admission-token-7dtgj" (OuterVolumeSpecName: "ingress-nginx-admission-token-7dtgj") pod "9a634121-6ce5-4cca-ae1e-4b9b8adfd7f5" (UID: "9a634121-6ce5-4cca-ae1e-4b9b8adfd7f5"). InnerVolumeSpecName "ingress-nginx-admission-token-7dtgj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 31 02:20:21 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:20:21.775325    1436 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-7dtgj" (UniqueName: "kubernetes.io/secret/9a634121-6ce5-4cca-ae1e-4b9b8adfd7f5-ingress-nginx-admission-token-7dtgj") on node "ingress-addon-legacy-757160" DevicePath ""
	Jan 31 02:20:30 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:20:30.907788    1436 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 31 02:20:31 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:20:31.009409    1436 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-wkxmc" (UniqueName: "kubernetes.io/secret/94e85f2c-950a-459e-8dad-dc1b91c25421-minikube-ingress-dns-token-wkxmc") pod "kube-ingress-dns-minikube" (UID: "94e85f2c-950a-459e-8dad-dc1b91c25421")
	Jan 31 02:20:39 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:20:39.568113    1436 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 31 02:20:39 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:20:39.633873    1436 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ntrfk" (UniqueName: "kubernetes.io/secret/77f68f24-e339-478e-9821-f76c8e02fee9-default-token-ntrfk") pod "nginx" (UID: "77f68f24-e339-478e-9821-f76c8e02fee9")
	Jan 31 02:23:02 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:02.549378    1436 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 31 02:23:02 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:02.568992    1436 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ntrfk" (UniqueName: "kubernetes.io/secret/fed9da6c-de56-4991-b0bb-8d3b98e0d23b-default-token-ntrfk") pod "hello-world-app-5f5d8b66bb-wxckq" (UID: "fed9da6c-de56-4991-b0bb-8d3b98e0d23b")
	Jan 31 02:23:02 ingress-addon-legacy-757160 kubelet[1436]: E0131 02:23:02.989948    1436 cadvisor_stats_provider.go:400] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/podfed9da6c-de56-4991-b0bb-8d3b98e0d23b/crio-conmon-70d525ac45b0cf53e52098c1ce8597596e7f168785f2b9587ebff8c6427e58d7": RecentStats: unable to find data in memory cache]
	Jan 31 02:23:04 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:04.463597    1436 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fba7002639ac6a42b02a36b7d4ba681f2dc3e9aa870ad1b4c57b0d35e5f16894
	Jan 31 02:23:04 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:04.575799    1436 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-wkxmc" (UniqueName: "kubernetes.io/secret/94e85f2c-950a-459e-8dad-dc1b91c25421-minikube-ingress-dns-token-wkxmc") pod "94e85f2c-950a-459e-8dad-dc1b91c25421" (UID: "94e85f2c-950a-459e-8dad-dc1b91c25421")
	Jan 31 02:23:04 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:04.579583    1436 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94e85f2c-950a-459e-8dad-dc1b91c25421-minikube-ingress-dns-token-wkxmc" (OuterVolumeSpecName: "minikube-ingress-dns-token-wkxmc") pod "94e85f2c-950a-459e-8dad-dc1b91c25421" (UID: "94e85f2c-950a-459e-8dad-dc1b91c25421"). InnerVolumeSpecName "minikube-ingress-dns-token-wkxmc". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 31 02:23:04 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:04.602122    1436 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fba7002639ac6a42b02a36b7d4ba681f2dc3e9aa870ad1b4c57b0d35e5f16894
	Jan 31 02:23:04 ingress-addon-legacy-757160 kubelet[1436]: E0131 02:23:04.602886    1436 remote_runtime.go:295] ContainerStatus "fba7002639ac6a42b02a36b7d4ba681f2dc3e9aa870ad1b4c57b0d35e5f16894" from runtime service failed: rpc error: code = NotFound desc = could not find container "fba7002639ac6a42b02a36b7d4ba681f2dc3e9aa870ad1b4c57b0d35e5f16894": container with ID starting with fba7002639ac6a42b02a36b7d4ba681f2dc3e9aa870ad1b4c57b0d35e5f16894 not found: ID does not exist
	Jan 31 02:23:04 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:04.676172    1436 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-wkxmc" (UniqueName: "kubernetes.io/secret/94e85f2c-950a-459e-8dad-dc1b91c25421-minikube-ingress-dns-token-wkxmc") on node "ingress-addon-legacy-757160" DevicePath ""
	Jan 31 02:23:13 ingress-addon-legacy-757160 kubelet[1436]: E0131 02:23:13.394738    1436 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-p2pd2.17af4d50d69f69f1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-p2pd2", UID:"ff6c613b-6e12-4447-b356-4af1483f3a82", APIVersion:"v1", ResourceVersion:"467", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-757160"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16689a45741fff1, ext:211451320904, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16689a45741fff1, ext:211451320904, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-p2pd2.17af4d50d69f69f1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 31 02:23:13 ingress-addon-legacy-757160 kubelet[1436]: E0131 02:23:13.406538    1436 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-p2pd2.17af4d50d69f69f1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-p2pd2", UID:"ff6c613b-6e12-4447-b356-4af1483f3a82", APIVersion:"v1", ResourceVersion:"467", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-757160"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16689a45741fff1, ext:211451320904, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16689a457f57000, ext:211463080535, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-p2pd2.17af4d50d69f69f1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 31 02:23:16 ingress-addon-legacy-757160 kubelet[1436]: W0131 02:23:16.504401    1436 pod_container_deletor.go:77] Container "92405e1a97388b2f05b5f1429d2a1ae593d2dea631c24ab3af73673fc4a848b6" not found in pod's containers
	Jan 31 02:23:17 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:17.514583    1436 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-6fhb7" (UniqueName: "kubernetes.io/secret/ff6c613b-6e12-4447-b356-4af1483f3a82-ingress-nginx-token-6fhb7") pod "ff6c613b-6e12-4447-b356-4af1483f3a82" (UID: "ff6c613b-6e12-4447-b356-4af1483f3a82")
	Jan 31 02:23:17 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:17.514616    1436 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ff6c613b-6e12-4447-b356-4af1483f3a82-webhook-cert") pod "ff6c613b-6e12-4447-b356-4af1483f3a82" (UID: "ff6c613b-6e12-4447-b356-4af1483f3a82")
	Jan 31 02:23:17 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:17.518610    1436 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6c613b-6e12-4447-b356-4af1483f3a82-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ff6c613b-6e12-4447-b356-4af1483f3a82" (UID: "ff6c613b-6e12-4447-b356-4af1483f3a82"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 31 02:23:17 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:17.518713    1436 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6c613b-6e12-4447-b356-4af1483f3a82-ingress-nginx-token-6fhb7" (OuterVolumeSpecName: "ingress-nginx-token-6fhb7") pod "ff6c613b-6e12-4447-b356-4af1483f3a82" (UID: "ff6c613b-6e12-4447-b356-4af1483f3a82"). InnerVolumeSpecName "ingress-nginx-token-6fhb7". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 31 02:23:17 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:17.615015    1436 reconciler.go:319] Volume detached for volume "ingress-nginx-token-6fhb7" (UniqueName: "kubernetes.io/secret/ff6c613b-6e12-4447-b356-4af1483f3a82-ingress-nginx-token-6fhb7") on node "ingress-addon-legacy-757160" DevicePath ""
	Jan 31 02:23:17 ingress-addon-legacy-757160 kubelet[1436]: I0131 02:23:17.615068    1436 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ff6c613b-6e12-4447-b356-4af1483f3a82-webhook-cert") on node "ingress-addon-legacy-757160" DevicePath ""
	Jan 31 02:23:18 ingress-addon-legacy-757160 kubelet[1436]: W0131 02:23:18.479569    1436 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/ff6c613b-6e12-4447-b356-4af1483f3a82/volumes" does not exist
	
	
	==> storage-provisioner [79542be60c1f1354e9d28cf6c21dfbbbe92c8cfd5ba2018b1303ae4c27c73739] <==
	I0131 02:20:00.794523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 02:20:00.812185       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 02:20:00.812314       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 02:20:00.820299       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 02:20:00.821598       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-757160_d880945f-4f38-4664-bf9e-8ae4204f5730!
	I0131 02:20:00.821413       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70863b03-a1d5-4a26-96b3-d34909c7cfe0", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-757160_d880945f-4f38-4664-bf9e-8ae4204f5730 became leader
	I0131 02:20:00.922677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-757160_d880945f-4f38-4664-bf9e-8ae4204f5730!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-757160 -n ingress-addon-legacy-757160
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-757160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (170.96s)
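The storage-provisioner log above ends with the standard Kubernetes leader-election handshake: the pod acquires the kube-system/k8s.io-minikube-hostpath lease and only then starts the provisioner controller. Below is a minimal sketch of that pattern, assuming client-go's leaderelection package; the real minikube storage provisioner wires this up through sig-storage-lib-external-provisioner (and, per the Endpoints event in the log, an Endpoints-based lock), so every name here is illustrative rather than the provisioner's actual code.

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// In-cluster config, as a provisioner pod would use.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	hostname, _ := os.Hostname()

	// Lock named after the lease in the log; Leases is the current
	// client-go default lock type (the log's provisioner used Endpoints).
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: hostname},
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Only the elected leader starts its controller loop,
				// mirroring "Starting provisioner controller ..." above.
				log.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}
```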

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (687.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-263108
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-263108
E0131 02:32:48.510176 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-263108: exit status 82 (2m0.304186119s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-263108"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-263108" : exit status 82
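Exit status 82 is minikube's GUEST_STOP_TIMEOUT: the stop command asked the driver to power off the VM, kept re-checking its state for roughly two minutes, and gave up while the state still read "Running". The sketch below shows that stop-then-poll-until-deadline pattern under stated assumptions; the function names, the stub driver, and the short deadline are all hypothetical and not minikube's actual API.

```go
package main

import (
	"fmt"
	"time"
)

// stopWithDeadline sketches the failure mode behind GUEST_STOP_TIMEOUT:
// request a stop, then poll the machine state until it reports "Stopped"
// or the deadline expires.
func stopWithDeadline(stop func() error, state func() (string, error), timeout, poll time.Duration) error {
	if err := stop(); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		st, err := state()
		if err != nil {
			return err
		}
		if st == "Stopped" {
			return nil
		}
		time.Sleep(poll)
	}
	st, _ := state()
	return fmt.Errorf("unable to stop vm, current state %q", st)
}

func main() {
	// Stub driver that never actually stops, reproducing the log above:
	// every poll keeps returning "Running" until the deadline passes.
	stop := func() error { return nil }
	state := func() (string, error) { return "Running", nil }

	if err := stopWithDeadline(stop, state, 3*time.Second, time.Second); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}
```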
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263108 --wait=true -v=8 --alsologtostderr
E0131 02:33:38.352242 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:35:01.394449 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:35:30.923844 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:37:48.510910 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:38:38.351339 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:39:11.554258 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:40:30.924100 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:41:53.974516 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:42:48.510918 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-263108 --wait=true -v=8 --alsologtostderr: (9m23.777965389s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-263108
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-263108 -n multinode-263108
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-263108 logs -n 25: (1.597853518s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:30 UTC | 31 Jan 24 02:30 UTC |
	|         | multinode-263108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263108 cp multinode-263108-m02:/home/docker/cp-test.txt                       | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:30 UTC | 31 Jan 24 02:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2294290134/001/cp-test_multinode-263108-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:30 UTC | 31 Jan 24 02:30 UTC |
	|         | multinode-263108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263108 cp multinode-263108-m02:/home/docker/cp-test.txt                       | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:30 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108:/home/docker/cp-test_multinode-263108-m02_multinode-263108.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n multinode-263108 sudo cat                                       | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | /home/docker/cp-test_multinode-263108-m02_multinode-263108.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-263108 cp multinode-263108-m02:/home/docker/cp-test.txt                       | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m03:/home/docker/cp-test_multinode-263108-m02_multinode-263108-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n multinode-263108-m03 sudo cat                                   | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | /home/docker/cp-test_multinode-263108-m02_multinode-263108-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-263108 cp testdata/cp-test.txt                                                | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263108 cp multinode-263108-m03:/home/docker/cp-test.txt                       | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2294290134/001/cp-test_multinode-263108-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263108 cp multinode-263108-m03:/home/docker/cp-test.txt                       | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108:/home/docker/cp-test_multinode-263108-m03_multinode-263108.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n multinode-263108 sudo cat                                       | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | /home/docker/cp-test_multinode-263108-m03_multinode-263108.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-263108 cp multinode-263108-m03:/home/docker/cp-test.txt                       | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m02:/home/docker/cp-test_multinode-263108-m03_multinode-263108-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n multinode-263108-m02 sudo cat                                   | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | /home/docker/cp-test_multinode-263108-m03_multinode-263108-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-263108 node stop m03                                                          | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	| node    | multinode-263108 node start                                                             | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-263108                                                                | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC |                     |
	| stop    | -p multinode-263108                                                                     | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC |                     |
	| start   | -p multinode-263108                                                                     | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:33 UTC | 31 Jan 24 02:42 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-263108                                                                | multinode-263108 | jenkins | v1.32.0 | 31 Jan 24 02:42 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 02:33:35
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 02:33:35.980655 1436700 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:33:35.980917 1436700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:33:35.980925 1436700 out.go:309] Setting ErrFile to fd 2...
	I0131 02:33:35.980930 1436700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:33:35.981133 1436700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:33:35.981701 1436700 out.go:303] Setting JSON to false
	I0131 02:33:35.982763 1436700 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":26159,"bootTime":1706642257,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 02:33:35.982826 1436700 start.go:138] virtualization: kvm guest
	I0131 02:33:35.985283 1436700 out.go:177] * [multinode-263108] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 02:33:35.987174 1436700 notify.go:220] Checking for updates...
	I0131 02:33:35.988582 1436700 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 02:33:35.990139 1436700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 02:33:35.992061 1436700 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:33:35.993449 1436700 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:33:35.994787 1436700 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 02:33:35.996093 1436700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 02:33:35.997779 1436700 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:33:35.997923 1436700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 02:33:35.998403 1436700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:33:35.998468 1436700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:33:36.013812 1436700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36559
	I0131 02:33:36.014335 1436700 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:33:36.014889 1436700 main.go:141] libmachine: Using API Version  1
	I0131 02:33:36.014914 1436700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:33:36.015332 1436700 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:33:36.015558 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:33:36.053772 1436700 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 02:33:36.055361 1436700 start.go:298] selected driver: kvm2
	I0131 02:33:36.055381 1436700 start.go:902] validating driver "kvm2" against &{Name:multinode-263108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-263108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:33:36.055521 1436700 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 02:33:36.055877 1436700 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:33:36.055959 1436700 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 02:33:36.071113 1436700 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 02:33:36.072117 1436700 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 02:33:36.072209 1436700 cni.go:84] Creating CNI manager for ""
	I0131 02:33:36.072223 1436700 cni.go:136] 3 nodes found, recommending kindnet
	I0131 02:33:36.072235 1436700 start_flags.go:321] config:
	{Name:multinode-263108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-263108 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:33:36.072543 1436700 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:33:36.074448 1436700 out.go:177] * Starting control plane node multinode-263108 in cluster multinode-263108
	I0131 02:33:36.075834 1436700 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 02:33:36.075870 1436700 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 02:33:36.075882 1436700 cache.go:56] Caching tarball of preloaded images
	I0131 02:33:36.075975 1436700 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 02:33:36.075990 1436700 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 02:33:36.076138 1436700 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/config.json ...
	I0131 02:33:36.076350 1436700 start.go:365] acquiring machines lock for multinode-263108: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 02:33:36.076404 1436700 start.go:369] acquired machines lock for "multinode-263108" in 31.968µs
	I0131 02:33:36.076423 1436700 start.go:96] Skipping create...Using existing machine configuration
	I0131 02:33:36.076453 1436700 fix.go:54] fixHost starting: 
	I0131 02:33:36.076765 1436700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:33:36.076811 1436700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:33:36.091243 1436700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I0131 02:33:36.091753 1436700 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:33:36.092205 1436700 main.go:141] libmachine: Using API Version  1
	I0131 02:33:36.092227 1436700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:33:36.092545 1436700 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:33:36.092735 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:33:36.092873 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetState
	I0131 02:33:36.094511 1436700 fix.go:102] recreateIfNeeded on multinode-263108: state=Running err=<nil>
	W0131 02:33:36.094548 1436700 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 02:33:36.096593 1436700 out.go:177] * Updating the running kvm2 "multinode-263108" VM ...
	I0131 02:33:36.098051 1436700 machine.go:88] provisioning docker machine ...
	I0131 02:33:36.098070 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:33:36.098258 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetMachineName
	I0131 02:33:36.098467 1436700 buildroot.go:166] provisioning hostname "multinode-263108"
	I0131 02:33:36.098507 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetMachineName
	I0131 02:33:36.098735 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:33:36.101093 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:33:36.101646 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:33:36.101683 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:33:36.101846 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:33:36.102024 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:33:36.102535 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:33:36.103454 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:33:36.104331 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:33:36.104811 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0131 02:33:36.104831 1436700 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-263108 && echo "multinode-263108" | sudo tee /etc/hostname
	I0131 02:33:54.494914 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:00.574838 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:03.646925 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:09.726818 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:12.798775 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:18.878840 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:21.950794 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:28.030883 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:31.102773 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:37.182737 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:40.254793 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:46.334915 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:49.406797 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:55.486848 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:34:58.558764 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:04.638914 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:07.710914 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:13.790752 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:16.862849 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:22.942860 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:26.014785 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:32.094864 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:35.166831 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:41.246833 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:44.318836 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:50.398812 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:53.470834 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:35:59.550764 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:02.622794 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:08.702783 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:11.774775 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:17.854826 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:20.926774 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:27.006797 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:30.078806 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:36.158778 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:39.230753 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:45.310844 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:48.382747 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:54.462796 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:36:57.534854 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:03.614810 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:06.686808 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:12.766815 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:15.838792 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:21.918815 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:24.990784 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:31.074771 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:34.142774 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:40.223177 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:43.294726 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:49.374772 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:52.446811 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:37:58.526775 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:38:01.598846 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:38:07.678824 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:38:10.751113 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:38:16.830766 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:38:19.902776 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:38:25.982797 1436700 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.109:22: connect: no route to host
	I0131 02:38:28.985177 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 02:38:28.985260 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:28.987575 1436700 machine.go:91] provisioned docker machine in 4m52.889504333s
	I0131 02:38:28.987622 1436700 fix.go:56] fixHost completed within 4m52.9111704s
	I0131 02:38:28.987629 1436700 start.go:83] releasing machines lock for "multinode-263108", held for 4m52.911215221s
	W0131 02:38:28.987648 1436700 start.go:694] error starting host: provision: host is not running
	W0131 02:38:28.987785 1436700 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0131 02:38:28.987795 1436700 start.go:709] Will try again in 5 seconds ...
	I0131 02:38:33.989893 1436700 start.go:365] acquiring machines lock for multinode-263108: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 02:38:33.990031 1436700 start.go:369] acquired machines lock for "multinode-263108" in 70.845µs
	I0131 02:38:33.990058 1436700 start.go:96] Skipping create...Using existing machine configuration
	I0131 02:38:33.990065 1436700 fix.go:54] fixHost starting: 
	I0131 02:38:33.990395 1436700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:38:33.990418 1436700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:38:34.005871 1436700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34119
	I0131 02:38:34.006436 1436700 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:38:34.007044 1436700 main.go:141] libmachine: Using API Version  1
	I0131 02:38:34.007073 1436700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:38:34.007470 1436700 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:38:34.007677 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:38:34.007834 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetState
	I0131 02:38:34.009843 1436700 fix.go:102] recreateIfNeeded on multinode-263108: state=Stopped err=<nil>
	I0131 02:38:34.009865 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	W0131 02:38:34.010051 1436700 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 02:38:34.012248 1436700 out.go:177] * Restarting existing kvm2 VM for "multinode-263108" ...
	I0131 02:38:34.013744 1436700 main.go:141] libmachine: (multinode-263108) Calling .Start
	I0131 02:38:34.013972 1436700 main.go:141] libmachine: (multinode-263108) Ensuring networks are active...
	I0131 02:38:34.014881 1436700 main.go:141] libmachine: (multinode-263108) Ensuring network default is active
	I0131 02:38:34.015315 1436700 main.go:141] libmachine: (multinode-263108) Ensuring network mk-multinode-263108 is active
	I0131 02:38:34.015701 1436700 main.go:141] libmachine: (multinode-263108) Getting domain xml...
	I0131 02:38:34.016669 1436700 main.go:141] libmachine: (multinode-263108) Creating domain...
	I0131 02:38:35.231950 1436700 main.go:141] libmachine: (multinode-263108) Waiting to get IP...
	I0131 02:38:35.232873 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:35.233308 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:35.233371 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:35.233269 1437500 retry.go:31] will retry after 209.24403ms: waiting for machine to come up
	I0131 02:38:35.443715 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:35.444294 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:35.444323 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:35.444229 1437500 retry.go:31] will retry after 334.520142ms: waiting for machine to come up
	I0131 02:38:35.780834 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:35.781358 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:35.781394 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:35.781329 1437500 retry.go:31] will retry after 479.997927ms: waiting for machine to come up
	I0131 02:38:36.263258 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:36.263747 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:36.263778 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:36.263700 1437500 retry.go:31] will retry after 552.390174ms: waiting for machine to come up
	I0131 02:38:36.817415 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:36.817903 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:36.817926 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:36.817852 1437500 retry.go:31] will retry after 600.111617ms: waiting for machine to come up
	I0131 02:38:37.419308 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:37.419680 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:37.419710 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:37.419635 1437500 retry.go:31] will retry after 911.519648ms: waiting for machine to come up
	I0131 02:38:38.332497 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:38.332903 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:38.332938 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:38.332833 1437500 retry.go:31] will retry after 826.335215ms: waiting for machine to come up
	I0131 02:38:39.161305 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:39.161848 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:39.161886 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:39.161790 1437500 retry.go:31] will retry after 1.305356857s: waiting for machine to come up
	I0131 02:38:40.468546 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:40.468972 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:40.469004 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:40.468917 1437500 retry.go:31] will retry after 1.739564175s: waiting for machine to come up
	I0131 02:38:42.210896 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:42.211399 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:42.211432 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:42.211328 1437500 retry.go:31] will retry after 1.78186367s: waiting for machine to come up
	I0131 02:38:43.995452 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:43.995986 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:43.996012 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:43.995952 1437500 retry.go:31] will retry after 2.254367544s: waiting for machine to come up
	I0131 02:38:46.252113 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:46.252565 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:46.252591 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:46.252527 1437500 retry.go:31] will retry after 3.206444503s: waiting for machine to come up
	I0131 02:38:49.460291 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:49.460797 1436700 main.go:141] libmachine: (multinode-263108) DBG | unable to find current IP address of domain multinode-263108 in network mk-multinode-263108
	I0131 02:38:49.460832 1436700 main.go:141] libmachine: (multinode-263108) DBG | I0131 02:38:49.460733 1437500 retry.go:31] will retry after 3.011465996s: waiting for machine to come up
	I0131 02:38:52.475929 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.476389 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has current primary IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.476408 1436700 main.go:141] libmachine: (multinode-263108) Found IP for machine: 192.168.39.109
	I0131 02:38:52.476419 1436700 main.go:141] libmachine: (multinode-263108) Reserving static IP address...
	I0131 02:38:52.476957 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "multinode-263108", mac: "52:54:00:35:a7:c9", ip: "192.168.39.109"} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:52.476985 1436700 main.go:141] libmachine: (multinode-263108) DBG | skip adding static IP to network mk-multinode-263108 - found existing host DHCP lease matching {name: "multinode-263108", mac: "52:54:00:35:a7:c9", ip: "192.168.39.109"}
	I0131 02:38:52.477003 1436700 main.go:141] libmachine: (multinode-263108) Reserved static IP address: 192.168.39.109
	I0131 02:38:52.477021 1436700 main.go:141] libmachine: (multinode-263108) Waiting for SSH to be available...
	I0131 02:38:52.477034 1436700 main.go:141] libmachine: (multinode-263108) DBG | Getting to WaitForSSH function...
	I0131 02:38:52.479714 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.480026 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:52.480053 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.480159 1436700 main.go:141] libmachine: (multinode-263108) DBG | Using SSH client type: external
	I0131 02:38:52.480189 1436700 main.go:141] libmachine: (multinode-263108) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa (-rw-------)
	I0131 02:38:52.480233 1436700 main.go:141] libmachine: (multinode-263108) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.109 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 02:38:52.480259 1436700 main.go:141] libmachine: (multinode-263108) DBG | About to run SSH command:
	I0131 02:38:52.480293 1436700 main.go:141] libmachine: (multinode-263108) DBG | exit 0
	I0131 02:38:52.570152 1436700 main.go:141] libmachine: (multinode-263108) DBG | SSH cmd err, output: <nil>: 
	I0131 02:38:52.570649 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetConfigRaw
	I0131 02:38:52.571469 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetIP
	I0131 02:38:52.574323 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.575007 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:52.575043 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.575354 1436700 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/config.json ...
	I0131 02:38:52.575601 1436700 machine.go:88] provisioning docker machine ...
	I0131 02:38:52.575623 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:38:52.575914 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetMachineName
	I0131 02:38:52.576116 1436700 buildroot.go:166] provisioning hostname "multinode-263108"
	I0131 02:38:52.576139 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetMachineName
	I0131 02:38:52.576318 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:52.578692 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.579023 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:52.579044 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.579202 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:38:52.579408 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:52.579576 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:52.579738 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:38:52.579913 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:38:52.580352 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0131 02:38:52.580365 1436700 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-263108 && echo "multinode-263108" | sudo tee /etc/hostname
	I0131 02:38:52.706162 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-263108
	
	I0131 02:38:52.706204 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:52.709361 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.709820 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:52.709850 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.710068 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:38:52.710290 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:52.710522 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:52.710693 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:38:52.710912 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:38:52.711280 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0131 02:38:52.711300 1436700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-263108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-263108/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-263108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 02:38:52.834270 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 02:38:52.834308 1436700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 02:38:52.834337 1436700 buildroot.go:174] setting up certificates
	I0131 02:38:52.834352 1436700 provision.go:83] configureAuth start
	I0131 02:38:52.834369 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetMachineName
	I0131 02:38:52.834711 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetIP
	I0131 02:38:52.837731 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.838193 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:52.838223 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.838375 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:52.840903 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.841291 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:52.841318 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:52.841437 1436700 provision.go:138] copyHostCerts
	I0131 02:38:52.841473 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 02:38:52.841512 1436700 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 02:38:52.841524 1436700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 02:38:52.841603 1436700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 02:38:52.841721 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 02:38:52.841747 1436700 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 02:38:52.841757 1436700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 02:38:52.841795 1436700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 02:38:52.841894 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 02:38:52.841914 1436700 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 02:38:52.841920 1436700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 02:38:52.841959 1436700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 02:38:52.842021 1436700 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.multinode-263108 san=[192.168.39.109 192.168.39.109 localhost 127.0.0.1 minikube multinode-263108]
	I0131 02:38:53.087687 1436700 provision.go:172] copyRemoteCerts
	I0131 02:38:53.087757 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 02:38:53.087810 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:53.091027 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.091365 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:53.091396 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.091570 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:38:53.091796 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:53.091969 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:38:53.092104 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa Username:docker}
	I0131 02:38:53.179987 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0131 02:38:53.180111 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 02:38:53.202544 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0131 02:38:53.202623 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0131 02:38:53.223190 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0131 02:38:53.223252 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 02:38:53.243355 1436700 provision.go:86] duration metric: configureAuth took 408.982141ms
	I0131 02:38:53.243392 1436700 buildroot.go:189] setting minikube options for container-runtime
	I0131 02:38:53.243627 1436700 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:38:53.243717 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:53.246843 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.247341 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:53.247371 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.247593 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:38:53.247846 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:53.248050 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:53.248187 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:38:53.248342 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:38:53.248751 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0131 02:38:53.248775 1436700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 02:38:53.546348 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 02:38:53.546388 1436700 machine.go:91] provisioned docker machine in 970.770599ms
	I0131 02:38:53.546404 1436700 start.go:300] post-start starting for "multinode-263108" (driver="kvm2")
	I0131 02:38:53.546419 1436700 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 02:38:53.546448 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:38:53.546846 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 02:38:53.546887 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:53.549858 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.550289 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:53.550325 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.550514 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:38:53.550754 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:53.550988 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:38:53.551164 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa Username:docker}
	I0131 02:38:53.641532 1436700 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 02:38:53.645567 1436700 command_runner.go:130] > NAME=Buildroot
	I0131 02:38:53.645588 1436700 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0131 02:38:53.645593 1436700 command_runner.go:130] > ID=buildroot
	I0131 02:38:53.645598 1436700 command_runner.go:130] > VERSION_ID=2021.02.12
	I0131 02:38:53.645603 1436700 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0131 02:38:53.645640 1436700 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 02:38:53.645659 1436700 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 02:38:53.645737 1436700 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 02:38:53.645829 1436700 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 02:38:53.645839 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> /etc/ssl/certs/14199762.pem
	I0131 02:38:53.645942 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 02:38:53.654318 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:38:53.674443 1436700 start.go:303] post-start completed in 128.023958ms
	I0131 02:38:53.674471 1436700 fix.go:56] fixHost completed within 19.684406161s
	I0131 02:38:53.674515 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:53.677239 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.677650 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:53.677693 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.677914 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:38:53.678111 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:53.678271 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:53.678376 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:38:53.678536 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:38:53.678901 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I0131 02:38:53.678916 1436700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 02:38:53.790894 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706668733.744455971
	
	I0131 02:38:53.790921 1436700 fix.go:206] guest clock: 1706668733.744455971
	I0131 02:38:53.790932 1436700 fix.go:219] Guest: 2024-01-31 02:38:53.744455971 +0000 UTC Remote: 2024-01-31 02:38:53.674475779 +0000 UTC m=+317.746950201 (delta=69.980192ms)
	I0131 02:38:53.790961 1436700 fix.go:190] guest clock delta is within tolerance: 69.980192ms
	I0131 02:38:53.790972 1436700 start.go:83] releasing machines lock for "multinode-263108", held for 19.800927424s
	I0131 02:38:53.791005 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:38:53.791340 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetIP
	I0131 02:38:53.794645 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.795162 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:53.795194 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.795380 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:38:53.795966 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:38:53.796159 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:38:53.796253 1436700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 02:38:53.796339 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:53.796374 1436700 ssh_runner.go:195] Run: cat /version.json
	I0131 02:38:53.796416 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:38:53.799097 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.799170 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.799499 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:53.799525 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.799554 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:53.799572 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:53.799647 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:38:53.799902 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:38:53.799906 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:53.800081 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:38:53.800089 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:38:53.800251 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:38:53.800265 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa Username:docker}
	I0131 02:38:53.800343 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa Username:docker}
	I0131 02:38:53.916446 1436700 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0131 02:38:53.917396 1436700 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0131 02:38:53.917557 1436700 ssh_runner.go:195] Run: systemctl --version
	I0131 02:38:53.923113 1436700 command_runner.go:130] > systemd 247 (247)
	I0131 02:38:53.923136 1436700 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0131 02:38:53.923198 1436700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 02:38:54.063412 1436700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0131 02:38:54.068995 1436700 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0131 02:38:54.069319 1436700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 02:38:54.069386 1436700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 02:38:54.083502 1436700 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0131 02:38:54.083550 1436700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 02:38:54.083561 1436700 start.go:475] detecting cgroup driver to use...
	I0131 02:38:54.083645 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 02:38:54.102077 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 02:38:54.115560 1436700 docker.go:217] disabling cri-docker service (if available) ...
	I0131 02:38:54.115661 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 02:38:54.127588 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 02:38:54.139332 1436700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 02:38:54.152092 1436700 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0131 02:38:54.237583 1436700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 02:38:54.249567 1436700 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0131 02:38:54.345900 1436700 docker.go:233] disabling docker service ...
	I0131 02:38:54.345997 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 02:38:54.358674 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 02:38:54.369571 1436700 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0131 02:38:54.369696 1436700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 02:38:54.382376 1436700 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0131 02:38:54.472661 1436700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 02:38:54.484430 1436700 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0131 02:38:54.484709 1436700 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0131 02:38:54.572772 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 02:38:54.585496 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 02:38:54.601360 1436700 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0131 02:38:54.601414 1436700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 02:38:54.601481 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:38:54.610372 1436700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 02:38:54.610459 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:38:54.620857 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:38:54.630884 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:38:54.641260 1436700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 02:38:54.651988 1436700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 02:38:54.661605 1436700 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 02:38:54.661650 1436700 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 02:38:54.661705 1436700 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 02:38:54.676282 1436700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 02:38:54.686061 1436700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 02:38:54.793525 1436700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 02:38:54.974030 1436700 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 02:38:54.974120 1436700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 02:38:54.979088 1436700 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0131 02:38:54.979113 1436700 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0131 02:38:54.979129 1436700 command_runner.go:130] > Device: 16h/22d	Inode: 749         Links: 1
	I0131 02:38:54.979138 1436700 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0131 02:38:54.979146 1436700 command_runner.go:130] > Access: 2024-01-31 02:38:54.910809878 +0000
	I0131 02:38:54.979155 1436700 command_runner.go:130] > Modify: 2024-01-31 02:38:54.910809878 +0000
	I0131 02:38:54.979164 1436700 command_runner.go:130] > Change: 2024-01-31 02:38:54.910809878 +0000
	I0131 02:38:54.979174 1436700 command_runner.go:130] >  Birth: -
	I0131 02:38:54.979196 1436700 start.go:543] Will wait 60s for crictl version
	I0131 02:38:54.979248 1436700 ssh_runner.go:195] Run: which crictl
	I0131 02:38:54.982594 1436700 command_runner.go:130] > /usr/bin/crictl
	I0131 02:38:54.982659 1436700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 02:38:55.018668 1436700 command_runner.go:130] > Version:  0.1.0
	I0131 02:38:55.018697 1436700 command_runner.go:130] > RuntimeName:  cri-o
	I0131 02:38:55.018705 1436700 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0131 02:38:55.018714 1436700 command_runner.go:130] > RuntimeApiVersion:  v1
	I0131 02:38:55.020307 1436700 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 02:38:55.020402 1436700 ssh_runner.go:195] Run: crio --version
	I0131 02:38:55.065604 1436700 command_runner.go:130] > crio version 1.24.1
	I0131 02:38:55.065637 1436700 command_runner.go:130] > Version:          1.24.1
	I0131 02:38:55.065649 1436700 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0131 02:38:55.065656 1436700 command_runner.go:130] > GitTreeState:     dirty
	I0131 02:38:55.065666 1436700 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0131 02:38:55.065675 1436700 command_runner.go:130] > GoVersion:        go1.19.9
	I0131 02:38:55.065682 1436700 command_runner.go:130] > Compiler:         gc
	I0131 02:38:55.065690 1436700 command_runner.go:130] > Platform:         linux/amd64
	I0131 02:38:55.065699 1436700 command_runner.go:130] > Linkmode:         dynamic
	I0131 02:38:55.065715 1436700 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0131 02:38:55.065733 1436700 command_runner.go:130] > SeccompEnabled:   true
	I0131 02:38:55.065740 1436700 command_runner.go:130] > AppArmorEnabled:  false
	I0131 02:38:55.067043 1436700 ssh_runner.go:195] Run: crio --version
	I0131 02:38:55.113936 1436700 command_runner.go:130] > crio version 1.24.1
	I0131 02:38:55.113971 1436700 command_runner.go:130] > Version:          1.24.1
	I0131 02:38:55.113983 1436700 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0131 02:38:55.113990 1436700 command_runner.go:130] > GitTreeState:     dirty
	I0131 02:38:55.114004 1436700 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0131 02:38:55.114012 1436700 command_runner.go:130] > GoVersion:        go1.19.9
	I0131 02:38:55.114019 1436700 command_runner.go:130] > Compiler:         gc
	I0131 02:38:55.114026 1436700 command_runner.go:130] > Platform:         linux/amd64
	I0131 02:38:55.114035 1436700 command_runner.go:130] > Linkmode:         dynamic
	I0131 02:38:55.114045 1436700 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0131 02:38:55.114053 1436700 command_runner.go:130] > SeccompEnabled:   true
	I0131 02:38:55.114060 1436700 command_runner.go:130] > AppArmorEnabled:  false
	I0131 02:38:55.117095 1436700 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 02:38:55.118545 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetIP
	I0131 02:38:55.121292 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:55.121742 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:38:55.121774 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:38:55.122088 1436700 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 02:38:55.125736 1436700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 02:38:55.136692 1436700 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 02:38:55.136756 1436700 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 02:38:55.177895 1436700 command_runner.go:130] > {
	I0131 02:38:55.177928 1436700 command_runner.go:130] >   "images": [
	I0131 02:38:55.177934 1436700 command_runner.go:130] >     {
	I0131 02:38:55.177946 1436700 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0131 02:38:55.177953 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:55.177975 1436700 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0131 02:38:55.177981 1436700 command_runner.go:130] >       ],
	I0131 02:38:55.177988 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:55.178002 1436700 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0131 02:38:55.178017 1436700 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0131 02:38:55.178023 1436700 command_runner.go:130] >       ],
	I0131 02:38:55.178037 1436700 command_runner.go:130] >       "size": "750414",
	I0131 02:38:55.178047 1436700 command_runner.go:130] >       "uid": {
	I0131 02:38:55.178054 1436700 command_runner.go:130] >         "value": "65535"
	I0131 02:38:55.178062 1436700 command_runner.go:130] >       },
	I0131 02:38:55.178068 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:55.178079 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:55.178084 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:55.178087 1436700 command_runner.go:130] >     }
	I0131 02:38:55.178091 1436700 command_runner.go:130] >   ]
	I0131 02:38:55.178094 1436700 command_runner.go:130] > }
	I0131 02:38:55.178226 1436700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 02:38:55.178279 1436700 ssh_runner.go:195] Run: which lz4
	I0131 02:38:55.181722 1436700 command_runner.go:130] > /usr/bin/lz4
	I0131 02:38:55.181860 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0131 02:38:55.181957 1436700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 02:38:55.185603 1436700 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 02:38:55.185656 1436700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 02:38:55.185686 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 02:38:56.745644 1436700 crio.go:444] Took 1.563721 seconds to copy over tarball
	I0131 02:38:56.745743 1436700 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 02:38:59.314716 1436700 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.568933939s)
	I0131 02:38:59.314749 1436700 crio.go:451] Took 2.569076 seconds to extract the tarball
	I0131 02:38:59.314762 1436700 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 02:38:59.361200 1436700 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 02:38:59.399922 1436700 command_runner.go:130] > {
	I0131 02:38:59.399947 1436700 command_runner.go:130] >   "images": [
	I0131 02:38:59.399957 1436700 command_runner.go:130] >     {
	I0131 02:38:59.399965 1436700 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0131 02:38:59.399970 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:59.399977 1436700 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0131 02:38:59.399981 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.399986 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:59.399994 1436700 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0131 02:38:59.400004 1436700 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0131 02:38:59.400010 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400019 1436700 command_runner.go:130] >       "size": "65258016",
	I0131 02:38:59.400026 1436700 command_runner.go:130] >       "uid": null,
	I0131 02:38:59.400030 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:59.400035 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:59.400042 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:59.400046 1436700 command_runner.go:130] >     },
	I0131 02:38:59.400054 1436700 command_runner.go:130] >     {
	I0131 02:38:59.400060 1436700 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0131 02:38:59.400066 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:59.400073 1436700 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0131 02:38:59.400079 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400084 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:59.400091 1436700 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0131 02:38:59.400100 1436700 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0131 02:38:59.400104 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400127 1436700 command_runner.go:130] >       "size": "31470524",
	I0131 02:38:59.400134 1436700 command_runner.go:130] >       "uid": null,
	I0131 02:38:59.400138 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:59.400142 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:59.400146 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:59.400150 1436700 command_runner.go:130] >     },
	I0131 02:38:59.400156 1436700 command_runner.go:130] >     {
	I0131 02:38:59.400162 1436700 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0131 02:38:59.400169 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:59.400174 1436700 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0131 02:38:59.400180 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400185 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:59.400203 1436700 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0131 02:38:59.400231 1436700 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0131 02:38:59.400241 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400245 1436700 command_runner.go:130] >       "size": "53621675",
	I0131 02:38:59.400252 1436700 command_runner.go:130] >       "uid": null,
	I0131 02:38:59.400256 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:59.400260 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:59.400266 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:59.400270 1436700 command_runner.go:130] >     },
	I0131 02:38:59.400276 1436700 command_runner.go:130] >     {
	I0131 02:38:59.400281 1436700 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0131 02:38:59.400288 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:59.400293 1436700 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0131 02:38:59.400299 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400303 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:59.400311 1436700 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0131 02:38:59.400320 1436700 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0131 02:38:59.400332 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400341 1436700 command_runner.go:130] >       "size": "295456551",
	I0131 02:38:59.400345 1436700 command_runner.go:130] >       "uid": {
	I0131 02:38:59.400352 1436700 command_runner.go:130] >         "value": "0"
	I0131 02:38:59.400355 1436700 command_runner.go:130] >       },
	I0131 02:38:59.400360 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:59.400364 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:59.400371 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:59.400374 1436700 command_runner.go:130] >     },
	I0131 02:38:59.400378 1436700 command_runner.go:130] >     {
	I0131 02:38:59.400384 1436700 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0131 02:38:59.400389 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:59.400394 1436700 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0131 02:38:59.400400 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400406 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:59.400415 1436700 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0131 02:38:59.400424 1436700 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0131 02:38:59.400430 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400435 1436700 command_runner.go:130] >       "size": "127226832",
	I0131 02:38:59.400444 1436700 command_runner.go:130] >       "uid": {
	I0131 02:38:59.400451 1436700 command_runner.go:130] >         "value": "0"
	I0131 02:38:59.400455 1436700 command_runner.go:130] >       },
	I0131 02:38:59.400459 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:59.400464 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:59.400469 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:59.400473 1436700 command_runner.go:130] >     },
	I0131 02:38:59.400476 1436700 command_runner.go:130] >     {
	I0131 02:38:59.400484 1436700 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0131 02:38:59.400490 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:59.400496 1436700 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0131 02:38:59.400502 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400507 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:59.400517 1436700 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0131 02:38:59.400527 1436700 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0131 02:38:59.400532 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400537 1436700 command_runner.go:130] >       "size": "123261750",
	I0131 02:38:59.400540 1436700 command_runner.go:130] >       "uid": {
	I0131 02:38:59.400546 1436700 command_runner.go:130] >         "value": "0"
	I0131 02:38:59.400552 1436700 command_runner.go:130] >       },
	I0131 02:38:59.400556 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:59.400560 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:59.400566 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:59.400570 1436700 command_runner.go:130] >     },
	I0131 02:38:59.400576 1436700 command_runner.go:130] >     {
	I0131 02:38:59.400581 1436700 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0131 02:38:59.400586 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:59.400591 1436700 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0131 02:38:59.400597 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400601 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:59.400610 1436700 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0131 02:38:59.400620 1436700 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0131 02:38:59.400624 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400630 1436700 command_runner.go:130] >       "size": "74749335",
	I0131 02:38:59.400635 1436700 command_runner.go:130] >       "uid": null,
	I0131 02:38:59.400639 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:59.400646 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:59.400652 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:59.400656 1436700 command_runner.go:130] >     },
	I0131 02:38:59.400659 1436700 command_runner.go:130] >     {
	I0131 02:38:59.400665 1436700 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0131 02:38:59.400671 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:59.400676 1436700 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0131 02:38:59.400682 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400686 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:59.400705 1436700 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0131 02:38:59.400715 1436700 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0131 02:38:59.400718 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400723 1436700 command_runner.go:130] >       "size": "61551410",
	I0131 02:38:59.400728 1436700 command_runner.go:130] >       "uid": {
	I0131 02:38:59.400732 1436700 command_runner.go:130] >         "value": "0"
	I0131 02:38:59.400738 1436700 command_runner.go:130] >       },
	I0131 02:38:59.400742 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:59.400746 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:59.400753 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:59.400759 1436700 command_runner.go:130] >     },
	I0131 02:38:59.400763 1436700 command_runner.go:130] >     {
	I0131 02:38:59.400769 1436700 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0131 02:38:59.400776 1436700 command_runner.go:130] >       "repoTags": [
	I0131 02:38:59.400781 1436700 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0131 02:38:59.400790 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400796 1436700 command_runner.go:130] >       "repoDigests": [
	I0131 02:38:59.400817 1436700 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0131 02:38:59.400829 1436700 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0131 02:38:59.400835 1436700 command_runner.go:130] >       ],
	I0131 02:38:59.400841 1436700 command_runner.go:130] >       "size": "750414",
	I0131 02:38:59.400848 1436700 command_runner.go:130] >       "uid": {
	I0131 02:38:59.400853 1436700 command_runner.go:130] >         "value": "65535"
	I0131 02:38:59.400859 1436700 command_runner.go:130] >       },
	I0131 02:38:59.400863 1436700 command_runner.go:130] >       "username": "",
	I0131 02:38:59.400867 1436700 command_runner.go:130] >       "spec": null,
	I0131 02:38:59.400874 1436700 command_runner.go:130] >       "pinned": false
	I0131 02:38:59.400881 1436700 command_runner.go:130] >     }
	I0131 02:38:59.400887 1436700 command_runner.go:130] >   ]
	I0131 02:38:59.400890 1436700 command_runner.go:130] > }
	I0131 02:38:59.401129 1436700 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 02:38:59.401144 1436700 cache_images.go:84] Images are preloaded, skipping loading
	I0131 02:38:59.401202 1436700 ssh_runner.go:195] Run: crio config
	I0131 02:38:59.446136 1436700 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0131 02:38:59.446174 1436700 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0131 02:38:59.446181 1436700 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0131 02:38:59.446185 1436700 command_runner.go:130] > #
	I0131 02:38:59.446192 1436700 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0131 02:38:59.446198 1436700 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0131 02:38:59.446212 1436700 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0131 02:38:59.446223 1436700 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0131 02:38:59.446245 1436700 command_runner.go:130] > # reload'.
	I0131 02:38:59.446252 1436700 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0131 02:38:59.446258 1436700 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0131 02:38:59.446265 1436700 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0131 02:38:59.446274 1436700 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0131 02:38:59.446278 1436700 command_runner.go:130] > [crio]
	I0131 02:38:59.446290 1436700 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0131 02:38:59.446303 1436700 command_runner.go:130] > # containers images, in this directory.
	I0131 02:38:59.446323 1436700 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0131 02:38:59.446337 1436700 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0131 02:38:59.446387 1436700 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0131 02:38:59.446405 1436700 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0131 02:38:59.446415 1436700 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0131 02:38:59.446589 1436700 command_runner.go:130] > storage_driver = "overlay"
	I0131 02:38:59.446614 1436700 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0131 02:38:59.446625 1436700 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0131 02:38:59.446633 1436700 command_runner.go:130] > storage_option = [
	I0131 02:38:59.446733 1436700 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0131 02:38:59.446792 1436700 command_runner.go:130] > ]
	I0131 02:38:59.446808 1436700 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0131 02:38:59.446819 1436700 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0131 02:38:59.447030 1436700 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0131 02:38:59.447042 1436700 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0131 02:38:59.447066 1436700 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0131 02:38:59.447071 1436700 command_runner.go:130] > # always happen on a node reboot
	I0131 02:38:59.447428 1436700 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0131 02:38:59.447454 1436700 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0131 02:38:59.447461 1436700 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0131 02:38:59.447489 1436700 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0131 02:38:59.447649 1436700 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0131 02:38:59.447662 1436700 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0131 02:38:59.447674 1436700 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0131 02:38:59.447930 1436700 command_runner.go:130] > # internal_wipe = true
	I0131 02:38:59.447948 1436700 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0131 02:38:59.447959 1436700 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0131 02:38:59.447971 1436700 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0131 02:38:59.448254 1436700 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0131 02:38:59.448264 1436700 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0131 02:38:59.448268 1436700 command_runner.go:130] > [crio.api]
	I0131 02:38:59.448274 1436700 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0131 02:38:59.448524 1436700 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0131 02:38:59.448539 1436700 command_runner.go:130] > # IP address on which the stream server will listen.
	I0131 02:38:59.448803 1436700 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0131 02:38:59.448821 1436700 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0131 02:38:59.448831 1436700 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0131 02:38:59.449015 1436700 command_runner.go:130] > # stream_port = "0"
	I0131 02:38:59.449026 1436700 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0131 02:38:59.449354 1436700 command_runner.go:130] > # stream_enable_tls = false
	I0131 02:38:59.449370 1436700 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0131 02:38:59.449535 1436700 command_runner.go:130] > # stream_idle_timeout = ""
	I0131 02:38:59.449559 1436700 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0131 02:38:59.449571 1436700 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0131 02:38:59.449580 1436700 command_runner.go:130] > # minutes.
	I0131 02:38:59.449599 1436700 command_runner.go:130] > # stream_tls_cert = ""
	I0131 02:38:59.449614 1436700 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0131 02:38:59.449625 1436700 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0131 02:38:59.449636 1436700 command_runner.go:130] > # stream_tls_key = ""
	I0131 02:38:59.449650 1436700 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0131 02:38:59.449661 1436700 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0131 02:38:59.449673 1436700 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0131 02:38:59.449683 1436700 command_runner.go:130] > # stream_tls_ca = ""
	I0131 02:38:59.449691 1436700 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0131 02:38:59.449701 1436700 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0131 02:38:59.449713 1436700 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0131 02:38:59.449724 1436700 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0131 02:38:59.449756 1436700 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0131 02:38:59.449775 1436700 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0131 02:38:59.449782 1436700 command_runner.go:130] > [crio.runtime]
	I0131 02:38:59.449793 1436700 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0131 02:38:59.449806 1436700 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0131 02:38:59.449815 1436700 command_runner.go:130] > # "nofile=1024:2048"
	I0131 02:38:59.449875 1436700 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0131 02:38:59.449899 1436700 command_runner.go:130] > # default_ulimits = [
	I0131 02:38:59.449908 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.449917 1436700 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0131 02:38:59.449926 1436700 command_runner.go:130] > # no_pivot = false
	I0131 02:38:59.449937 1436700 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0131 02:38:59.449949 1436700 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0131 02:38:59.449961 1436700 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0131 02:38:59.449971 1436700 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0131 02:38:59.449983 1436700 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0131 02:38:59.449998 1436700 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0131 02:38:59.450008 1436700 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0131 02:38:59.450031 1436700 command_runner.go:130] > # Cgroup setting for conmon
	I0131 02:38:59.450046 1436700 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0131 02:38:59.450057 1436700 command_runner.go:130] > conmon_cgroup = "pod"
	I0131 02:38:59.450072 1436700 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0131 02:38:59.450083 1436700 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0131 02:38:59.450098 1436700 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0131 02:38:59.450114 1436700 command_runner.go:130] > conmon_env = [
	I0131 02:38:59.450138 1436700 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0131 02:38:59.450146 1436700 command_runner.go:130] > ]
	I0131 02:38:59.450157 1436700 command_runner.go:130] > # Additional environment variables to set for all the
	I0131 02:38:59.450168 1436700 command_runner.go:130] > # containers. These are overridden if set in the
	I0131 02:38:59.450176 1436700 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0131 02:38:59.450183 1436700 command_runner.go:130] > # default_env = [
	I0131 02:38:59.450186 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.450194 1436700 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0131 02:38:59.450198 1436700 command_runner.go:130] > # selinux = false
	I0131 02:38:59.450205 1436700 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0131 02:38:59.450214 1436700 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0131 02:38:59.450220 1436700 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0131 02:38:59.450226 1436700 command_runner.go:130] > # seccomp_profile = ""
	I0131 02:38:59.450232 1436700 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0131 02:38:59.450240 1436700 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0131 02:38:59.450253 1436700 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0131 02:38:59.450265 1436700 command_runner.go:130] > # which might increase security.
	I0131 02:38:59.450280 1436700 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0131 02:38:59.450294 1436700 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0131 02:38:59.450308 1436700 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0131 02:38:59.450322 1436700 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0131 02:38:59.450336 1436700 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0131 02:38:59.450345 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:38:59.450357 1436700 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0131 02:38:59.450371 1436700 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0131 02:38:59.450381 1436700 command_runner.go:130] > # the cgroup blockio controller.
	I0131 02:38:59.450388 1436700 command_runner.go:130] > # blockio_config_file = ""
	I0131 02:38:59.450395 1436700 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0131 02:38:59.450401 1436700 command_runner.go:130] > # irqbalance daemon.
	I0131 02:38:59.450407 1436700 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0131 02:38:59.450415 1436700 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0131 02:38:59.450421 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:38:59.450428 1436700 command_runner.go:130] > # rdt_config_file = ""
	I0131 02:38:59.450437 1436700 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0131 02:38:59.450445 1436700 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0131 02:38:59.450464 1436700 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0131 02:38:59.450473 1436700 command_runner.go:130] > # separate_pull_cgroup = ""
	I0131 02:38:59.450500 1436700 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0131 02:38:59.450516 1436700 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0131 02:38:59.450524 1436700 command_runner.go:130] > # will be added.
	I0131 02:38:59.450534 1436700 command_runner.go:130] > # default_capabilities = [
	I0131 02:38:59.450543 1436700 command_runner.go:130] > # 	"CHOWN",
	I0131 02:38:59.450552 1436700 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0131 02:38:59.450562 1436700 command_runner.go:130] > # 	"FSETID",
	I0131 02:38:59.450569 1436700 command_runner.go:130] > # 	"FOWNER",
	I0131 02:38:59.450573 1436700 command_runner.go:130] > # 	"SETGID",
	I0131 02:38:59.450577 1436700 command_runner.go:130] > # 	"SETUID",
	I0131 02:38:59.450586 1436700 command_runner.go:130] > # 	"SETPCAP",
	I0131 02:38:59.450595 1436700 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0131 02:38:59.450601 1436700 command_runner.go:130] > # 	"KILL",
	I0131 02:38:59.450613 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.450627 1436700 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0131 02:38:59.450639 1436700 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0131 02:38:59.450653 1436700 command_runner.go:130] > # default_sysctls = [
	I0131 02:38:59.450664 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.450673 1436700 command_runner.go:130] > # List of devices on the host that a
	I0131 02:38:59.450684 1436700 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0131 02:38:59.450692 1436700 command_runner.go:130] > # allowed_devices = [
	I0131 02:38:59.450699 1436700 command_runner.go:130] > # 	"/dev/fuse",
	I0131 02:38:59.450708 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.450717 1436700 command_runner.go:130] > # List of additional devices, specified as
	I0131 02:38:59.450742 1436700 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0131 02:38:59.450765 1436700 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0131 02:38:59.450830 1436700 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0131 02:38:59.450844 1436700 command_runner.go:130] > # additional_devices = [
	I0131 02:38:59.450850 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.450859 1436700 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0131 02:38:59.450868 1436700 command_runner.go:130] > # cdi_spec_dirs = [
	I0131 02:38:59.450875 1436700 command_runner.go:130] > # 	"/etc/cdi",
	I0131 02:38:59.450882 1436700 command_runner.go:130] > # 	"/var/run/cdi",
	I0131 02:38:59.450892 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.450906 1436700 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0131 02:38:59.450920 1436700 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0131 02:38:59.450931 1436700 command_runner.go:130] > # Defaults to false.
	I0131 02:38:59.450942 1436700 command_runner.go:130] > # device_ownership_from_security_context = false
	I0131 02:38:59.450956 1436700 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0131 02:38:59.450969 1436700 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0131 02:38:59.450976 1436700 command_runner.go:130] > # hooks_dir = [
	I0131 02:38:59.450988 1436700 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0131 02:38:59.450995 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.451007 1436700 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0131 02:38:59.451020 1436700 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0131 02:38:59.451031 1436700 command_runner.go:130] > # its default mounts from the following two files:
	I0131 02:38:59.451037 1436700 command_runner.go:130] > #
	I0131 02:38:59.451044 1436700 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0131 02:38:59.451058 1436700 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0131 02:38:59.451071 1436700 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0131 02:38:59.451077 1436700 command_runner.go:130] > #
	I0131 02:38:59.451091 1436700 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0131 02:38:59.451109 1436700 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0131 02:38:59.451128 1436700 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0131 02:38:59.451139 1436700 command_runner.go:130] > #      only add mounts it finds in this file.
	I0131 02:38:59.451144 1436700 command_runner.go:130] > #
	I0131 02:38:59.451151 1436700 command_runner.go:130] > # default_mounts_file = ""
	I0131 02:38:59.451164 1436700 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0131 02:38:59.451179 1436700 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0131 02:38:59.451189 1436700 command_runner.go:130] > pids_limit = 1024
	I0131 02:38:59.451200 1436700 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0131 02:38:59.451213 1436700 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0131 02:38:59.451225 1436700 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0131 02:38:59.451238 1436700 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0131 02:38:59.451251 1436700 command_runner.go:130] > # log_size_max = -1
	I0131 02:38:59.451268 1436700 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0131 02:38:59.451280 1436700 command_runner.go:130] > # log_to_journald = false
	I0131 02:38:59.451290 1436700 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0131 02:38:59.451302 1436700 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0131 02:38:59.451314 1436700 command_runner.go:130] > # Path to directory for container attach sockets.
	I0131 02:38:59.451325 1436700 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0131 02:38:59.451336 1436700 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0131 02:38:59.451344 1436700 command_runner.go:130] > # bind_mount_prefix = ""
	I0131 02:38:59.451357 1436700 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0131 02:38:59.451368 1436700 command_runner.go:130] > # read_only = false
	I0131 02:38:59.451379 1436700 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0131 02:38:59.451391 1436700 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0131 02:38:59.451400 1436700 command_runner.go:130] > # live configuration reload.
	I0131 02:38:59.451406 1436700 command_runner.go:130] > # log_level = "info"
	I0131 02:38:59.451419 1436700 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0131 02:38:59.451430 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:38:59.451439 1436700 command_runner.go:130] > # log_filter = ""
	I0131 02:38:59.451450 1436700 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0131 02:38:59.451462 1436700 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0131 02:38:59.451472 1436700 command_runner.go:130] > # separated by comma.
	I0131 02:38:59.451478 1436700 command_runner.go:130] > # uid_mappings = ""
	I0131 02:38:59.451493 1436700 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0131 02:38:59.451506 1436700 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0131 02:38:59.451520 1436700 command_runner.go:130] > # separated by comma.
	I0131 02:38:59.451530 1436700 command_runner.go:130] > # gid_mappings = ""
	I0131 02:38:59.451542 1436700 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0131 02:38:59.451555 1436700 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0131 02:38:59.451567 1436700 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0131 02:38:59.451576 1436700 command_runner.go:130] > # minimum_mappable_uid = -1
	I0131 02:38:59.451586 1436700 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0131 02:38:59.451599 1436700 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0131 02:38:59.451609 1436700 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0131 02:38:59.451616 1436700 command_runner.go:130] > # minimum_mappable_gid = -1
	I0131 02:38:59.451627 1436700 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0131 02:38:59.451640 1436700 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0131 02:38:59.451651 1436700 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0131 02:38:59.451661 1436700 command_runner.go:130] > # ctr_stop_timeout = 30
	I0131 02:38:59.451670 1436700 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0131 02:38:59.451683 1436700 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0131 02:38:59.451690 1436700 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0131 02:38:59.451701 1436700 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0131 02:38:59.451715 1436700 command_runner.go:130] > drop_infra_ctr = false
	I0131 02:38:59.451729 1436700 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0131 02:38:59.451741 1436700 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0131 02:38:59.451756 1436700 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0131 02:38:59.451766 1436700 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0131 02:38:59.451776 1436700 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0131 02:38:59.451788 1436700 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0131 02:38:59.451795 1436700 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0131 02:38:59.451806 1436700 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0131 02:38:59.451817 1436700 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0131 02:38:59.451827 1436700 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0131 02:38:59.451842 1436700 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0131 02:38:59.451853 1436700 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0131 02:38:59.451866 1436700 command_runner.go:130] > # default_runtime = "runc"
	I0131 02:38:59.451877 1436700 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0131 02:38:59.451889 1436700 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0131 02:38:59.451907 1436700 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0131 02:38:59.451918 1436700 command_runner.go:130] > # creation as a file is not desired either.
	I0131 02:38:59.451937 1436700 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0131 02:38:59.451949 1436700 command_runner.go:130] > # the hostname is being managed dynamically.
	I0131 02:38:59.451957 1436700 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0131 02:38:59.451965 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.451976 1436700 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0131 02:38:59.451987 1436700 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0131 02:38:59.452000 1436700 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0131 02:38:59.452014 1436700 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0131 02:38:59.452019 1436700 command_runner.go:130] > #
	I0131 02:38:59.452027 1436700 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0131 02:38:59.452038 1436700 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0131 02:38:59.452045 1436700 command_runner.go:130] > #  runtime_type = "oci"
	I0131 02:38:59.452056 1436700 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0131 02:38:59.452065 1436700 command_runner.go:130] > #  privileged_without_host_devices = false
	I0131 02:38:59.452075 1436700 command_runner.go:130] > #  allowed_annotations = []
	I0131 02:38:59.452081 1436700 command_runner.go:130] > # Where:
	I0131 02:38:59.452094 1436700 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0131 02:38:59.452111 1436700 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0131 02:38:59.452134 1436700 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0131 02:38:59.452147 1436700 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0131 02:38:59.452156 1436700 command_runner.go:130] > #   in $PATH.
	I0131 02:38:59.452166 1436700 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0131 02:38:59.452177 1436700 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0131 02:38:59.452191 1436700 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0131 02:38:59.452198 1436700 command_runner.go:130] > #   state.
	I0131 02:38:59.452212 1436700 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0131 02:38:59.452224 1436700 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0131 02:38:59.452236 1436700 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0131 02:38:59.452248 1436700 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0131 02:38:59.452262 1436700 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0131 02:38:59.452276 1436700 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0131 02:38:59.452287 1436700 command_runner.go:130] > #   The currently recognized values are:
	I0131 02:38:59.452300 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0131 02:38:59.452312 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0131 02:38:59.452325 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0131 02:38:59.452338 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0131 02:38:59.452357 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0131 02:38:59.452370 1436700 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0131 02:38:59.452383 1436700 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0131 02:38:59.452398 1436700 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0131 02:38:59.452409 1436700 command_runner.go:130] > #   should be moved to the container's cgroup
	I0131 02:38:59.452417 1436700 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0131 02:38:59.452429 1436700 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0131 02:38:59.452439 1436700 command_runner.go:130] > runtime_type = "oci"
	I0131 02:38:59.452447 1436700 command_runner.go:130] > runtime_root = "/run/runc"
	I0131 02:38:59.452457 1436700 command_runner.go:130] > runtime_config_path = ""
	I0131 02:38:59.452467 1436700 command_runner.go:130] > monitor_path = ""
	I0131 02:38:59.452475 1436700 command_runner.go:130] > monitor_cgroup = ""
	I0131 02:38:59.452486 1436700 command_runner.go:130] > monitor_exec_cgroup = ""
	I0131 02:38:59.452496 1436700 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0131 02:38:59.452501 1436700 command_runner.go:130] > # running containers
	I0131 02:38:59.452506 1436700 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0131 02:38:59.452515 1436700 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0131 02:38:59.452563 1436700 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0131 02:38:59.452577 1436700 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0131 02:38:59.452590 1436700 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0131 02:38:59.452599 1436700 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0131 02:38:59.452611 1436700 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0131 02:38:59.452622 1436700 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0131 02:38:59.452633 1436700 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0131 02:38:59.452641 1436700 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0131 02:38:59.452656 1436700 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0131 02:38:59.452668 1436700 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0131 02:38:59.452683 1436700 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0131 02:38:59.452696 1436700 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0131 02:38:59.452715 1436700 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0131 02:38:59.452728 1436700 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0131 02:38:59.452746 1436700 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0131 02:38:59.452757 1436700 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0131 02:38:59.452770 1436700 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0131 02:38:59.452785 1436700 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0131 02:38:59.452795 1436700 command_runner.go:130] > # Example:
	I0131 02:38:59.452807 1436700 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0131 02:38:59.452819 1436700 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0131 02:38:59.452831 1436700 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0131 02:38:59.452843 1436700 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0131 02:38:59.452850 1436700 command_runner.go:130] > # cpuset = 0
	I0131 02:38:59.452855 1436700 command_runner.go:130] > # cpushares = "0-1"
	I0131 02:38:59.452863 1436700 command_runner.go:130] > # Where:
	I0131 02:38:59.452875 1436700 command_runner.go:130] > # The workload name is workload-type.
	I0131 02:38:59.452888 1436700 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0131 02:38:59.452901 1436700 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0131 02:38:59.452913 1436700 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0131 02:38:59.452929 1436700 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0131 02:38:59.452940 1436700 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0131 02:38:59.452944 1436700 command_runner.go:130] > # 
	I0131 02:38:59.452959 1436700 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0131 02:38:59.452969 1436700 command_runner.go:130] > #
	I0131 02:38:59.452981 1436700 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0131 02:38:59.452995 1436700 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0131 02:38:59.453012 1436700 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0131 02:38:59.453026 1436700 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0131 02:38:59.453038 1436700 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0131 02:38:59.453045 1436700 command_runner.go:130] > [crio.image]
	I0131 02:38:59.453055 1436700 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0131 02:38:59.453066 1436700 command_runner.go:130] > # default_transport = "docker://"
	I0131 02:38:59.453080 1436700 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0131 02:38:59.453093 1436700 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0131 02:38:59.453104 1436700 command_runner.go:130] > # global_auth_file = ""
	I0131 02:38:59.453116 1436700 command_runner.go:130] > # The image used to instantiate infra containers.
	I0131 02:38:59.453130 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:38:59.453140 1436700 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0131 02:38:59.453154 1436700 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0131 02:38:59.453169 1436700 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0131 02:38:59.453180 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:38:59.453190 1436700 command_runner.go:130] > # pause_image_auth_file = ""
	I0131 02:38:59.453201 1436700 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0131 02:38:59.453212 1436700 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0131 02:38:59.453224 1436700 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0131 02:38:59.453237 1436700 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0131 02:38:59.453248 1436700 command_runner.go:130] > # pause_command = "/pause"
	I0131 02:38:59.453259 1436700 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0131 02:38:59.453273 1436700 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0131 02:38:59.453286 1436700 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0131 02:38:59.453300 1436700 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0131 02:38:59.453309 1436700 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0131 02:38:59.453316 1436700 command_runner.go:130] > # signature_policy = ""
	I0131 02:38:59.453322 1436700 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0131 02:38:59.453331 1436700 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0131 02:38:59.453338 1436700 command_runner.go:130] > # changing them here.
	I0131 02:38:59.453345 1436700 command_runner.go:130] > # insecure_registries = [
	I0131 02:38:59.453351 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.453362 1436700 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0131 02:38:59.453371 1436700 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0131 02:38:59.453378 1436700 command_runner.go:130] > # image_volumes = "mkdir"
	I0131 02:38:59.453387 1436700 command_runner.go:130] > # Temporary directory to use for storing big files
	I0131 02:38:59.453400 1436700 command_runner.go:130] > # big_files_temporary_dir = ""
	I0131 02:38:59.453407 1436700 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0131 02:38:59.453411 1436700 command_runner.go:130] > # CNI plugins.
	I0131 02:38:59.453422 1436700 command_runner.go:130] > [crio.network]
	I0131 02:38:59.453433 1436700 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0131 02:38:59.453445 1436700 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0131 02:38:59.453455 1436700 command_runner.go:130] > # cni_default_network = ""
	I0131 02:38:59.453466 1436700 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0131 02:38:59.453477 1436700 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0131 02:38:59.453488 1436700 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0131 02:38:59.453494 1436700 command_runner.go:130] > # plugin_dirs = [
	I0131 02:38:59.453500 1436700 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0131 02:38:59.453509 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.453519 1436700 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0131 02:38:59.453529 1436700 command_runner.go:130] > [crio.metrics]
	I0131 02:38:59.453540 1436700 command_runner.go:130] > # Globally enable or disable metrics support.
	I0131 02:38:59.453551 1436700 command_runner.go:130] > enable_metrics = true
	I0131 02:38:59.453561 1436700 command_runner.go:130] > # Specify enabled metrics collectors.
	I0131 02:38:59.453578 1436700 command_runner.go:130] > # Per default all metrics are enabled.
	I0131 02:38:59.453591 1436700 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0131 02:38:59.453604 1436700 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0131 02:38:59.453618 1436700 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0131 02:38:59.453625 1436700 command_runner.go:130] > # metrics_collectors = [
	I0131 02:38:59.453635 1436700 command_runner.go:130] > # 	"operations",
	I0131 02:38:59.453644 1436700 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0131 02:38:59.453655 1436700 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0131 02:38:59.453666 1436700 command_runner.go:130] > # 	"operations_errors",
	I0131 02:38:59.453674 1436700 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0131 02:38:59.453684 1436700 command_runner.go:130] > # 	"image_pulls_by_name",
	I0131 02:38:59.453690 1436700 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0131 02:38:59.453695 1436700 command_runner.go:130] > # 	"image_pulls_failures",
	I0131 02:38:59.453705 1436700 command_runner.go:130] > # 	"image_pulls_successes",
	I0131 02:38:59.453714 1436700 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0131 02:38:59.453725 1436700 command_runner.go:130] > # 	"image_layer_reuse",
	I0131 02:38:59.453735 1436700 command_runner.go:130] > # 	"containers_oom_total",
	I0131 02:38:59.453743 1436700 command_runner.go:130] > # 	"containers_oom",
	I0131 02:38:59.453754 1436700 command_runner.go:130] > # 	"processes_defunct",
	I0131 02:38:59.453765 1436700 command_runner.go:130] > # 	"operations_total",
	I0131 02:38:59.453774 1436700 command_runner.go:130] > # 	"operations_latency_seconds",
	I0131 02:38:59.453779 1436700 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0131 02:38:59.453789 1436700 command_runner.go:130] > # 	"operations_errors_total",
	I0131 02:38:59.453798 1436700 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0131 02:38:59.453810 1436700 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0131 02:38:59.453821 1436700 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0131 02:38:59.453831 1436700 command_runner.go:130] > # 	"image_pulls_success_total",
	I0131 02:38:59.453841 1436700 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0131 02:38:59.453849 1436700 command_runner.go:130] > # 	"containers_oom_count_total",
	I0131 02:38:59.453858 1436700 command_runner.go:130] > # ]
	I0131 02:38:59.453868 1436700 command_runner.go:130] > # The port on which the metrics server will listen.
	I0131 02:38:59.453878 1436700 command_runner.go:130] > # metrics_port = 9090
	I0131 02:38:59.453889 1436700 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0131 02:38:59.453899 1436700 command_runner.go:130] > # metrics_socket = ""
	I0131 02:38:59.453909 1436700 command_runner.go:130] > # The certificate for the secure metrics server.
	I0131 02:38:59.453922 1436700 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0131 02:38:59.453939 1436700 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0131 02:38:59.453951 1436700 command_runner.go:130] > # certificate on any modification event.
	I0131 02:38:59.453961 1436700 command_runner.go:130] > # metrics_cert = ""
	I0131 02:38:59.453971 1436700 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0131 02:38:59.453979 1436700 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0131 02:38:59.453987 1436700 command_runner.go:130] > # metrics_key = ""
	I0131 02:38:59.454000 1436700 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0131 02:38:59.454007 1436700 command_runner.go:130] > [crio.tracing]
	I0131 02:38:59.454020 1436700 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0131 02:38:59.454030 1436700 command_runner.go:130] > # enable_tracing = false
	I0131 02:38:59.454041 1436700 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0131 02:38:59.454048 1436700 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0131 02:38:59.454060 1436700 command_runner.go:130] > # Number of samples to collect per million spans.
	I0131 02:38:59.454076 1436700 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0131 02:38:59.454090 1436700 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0131 02:38:59.454100 1436700 command_runner.go:130] > [crio.stats]
	I0131 02:38:59.454113 1436700 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0131 02:38:59.454130 1436700 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0131 02:38:59.454142 1436700 command_runner.go:130] > # stats_collection_period = 0
	I0131 02:38:59.454179 1436700 command_runner.go:130] ! time="2024-01-31 02:38:59.397993754Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0131 02:38:59.454201 1436700 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
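	Note: everything between the "crio config" Run line and the two startup messages directly above is the rendered CRI-O configuration as the daemon itself reports it. A minimal sketch for spot-checking the values minikube overrides (cgroup driver, pause image, pids limit) on the node; the profile name comes from this log, the grep pattern is only illustrative:
	    minikube -p multinode-263108 ssh "sudo crio config 2>/dev/null | grep -E '^(cgroup_manager|pause_image|pids_limit)'"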
	I0131 02:38:59.454293 1436700 cni.go:84] Creating CNI manager for ""
	I0131 02:38:59.454305 1436700 cni.go:136] 3 nodes found, recommending kindnet
	I0131 02:38:59.454325 1436700 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 02:38:59.454367 1436700 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-263108 NodeName:multinode-263108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 02:38:59.454554 1436700 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-263108"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
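	Note: the kubeadm config printed above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new in the 2103-byte scp a few lines below. A minimal sketch for validating such a rendered config with the node's own kubeadm binary; the "kubeadm config validate" subcommand is assumed to be available in the v1.28 CLI, and treating it as a check minikube itself runs would also be an assumption:
	    minikube -p multinode-263108 ssh -- sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new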
	
	I0131 02:38:59.454629 1436700 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-263108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-263108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
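	Note: the ExecStart line above is what lands in the 376-byte systemd drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) scp'd below. A minimal sketch for confirming the unit plus drop-in that systemd actually loads on the node, using the profile name from this log:
	    minikube -p multinode-263108 ssh -- sudo systemctl cat kubelet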
	I0131 02:38:59.454680 1436700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 02:38:59.463606 1436700 command_runner.go:130] > kubeadm
	I0131 02:38:59.463622 1436700 command_runner.go:130] > kubectl
	I0131 02:38:59.463627 1436700 command_runner.go:130] > kubelet
	I0131 02:38:59.463651 1436700 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 02:38:59.463705 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 02:38:59.471913 1436700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0131 02:38:59.485949 1436700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 02:38:59.500088 1436700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0131 02:38:59.515086 1436700 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I0131 02:38:59.518545 1436700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
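	Note: the bash one-liner above is idempotent: it filters any existing control-plane.minikube.internal entry out of /etc/hosts and re-appends it with the current node IP. A minimal verification sketch, with the expected entry taken from this log:
	    minikube -p multinode-263108 ssh -- grep control-plane.minikube.internal /etc/hosts
	    # expected per this log: 192.168.39.109	control-plane.minikube.internal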
	I0131 02:38:59.528803 1436700 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108 for IP: 192.168.39.109
	I0131 02:38:59.528838 1436700 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:38:59.529055 1436700 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 02:38:59.529140 1436700 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 02:38:59.529236 1436700 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.key
	I0131 02:38:59.529328 1436700 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/apiserver.key.0b15c42e
	I0131 02:38:59.529388 1436700 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/proxy-client.key
	I0131 02:38:59.529403 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0131 02:38:59.529423 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0131 02:38:59.529441 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0131 02:38:59.529462 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0131 02:38:59.529482 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0131 02:38:59.529500 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0131 02:38:59.529517 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0131 02:38:59.529535 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0131 02:38:59.529612 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 02:38:59.529652 1436700 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 02:38:59.529667 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 02:38:59.529698 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 02:38:59.529739 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 02:38:59.529782 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 02:38:59.529836 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:38:59.529875 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> /usr/share/ca-certificates/14199762.pem
	I0131 02:38:59.529894 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:38:59.529910 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem -> /usr/share/ca-certificates/1419976.pem
	I0131 02:38:59.530627 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 02:38:59.551507 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 02:38:59.571525 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 02:38:59.592479 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 02:38:59.613080 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 02:38:59.633488 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 02:38:59.653197 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 02:38:59.673515 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 02:38:59.694060 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 02:38:59.713790 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 02:38:59.733107 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 02:38:59.753178 1436700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 02:38:59.767906 1436700 ssh_runner.go:195] Run: openssl version
	I0131 02:38:59.773113 1436700 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0131 02:38:59.773224 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 02:38:59.782775 1436700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:38:59.786835 1436700 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:38:59.787049 1436700 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:38:59.787110 1436700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:38:59.791758 1436700 command_runner.go:130] > b5213941
	I0131 02:38:59.792001 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 02:38:59.801679 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 02:38:59.811360 1436700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 02:38:59.815339 1436700 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 02:38:59.815540 1436700 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 02:38:59.815605 1436700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 02:38:59.820429 1436700 command_runner.go:130] > 51391683
	I0131 02:38:59.820737 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 02:38:59.830148 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 02:38:59.839435 1436700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 02:38:59.843260 1436700 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 02:38:59.843633 1436700 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 02:38:59.843692 1436700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 02:38:59.848550 1436700 command_runner.go:130] > 3ec20f2e
	I0131 02:38:59.848608 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
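
The three blocks above install each CA bundle under /usr/share/ca-certificates and then link it by its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted CAs in /etc/ssl/certs. A minimal Go sketch of that step, shelling out to openssl the same way the log does; the paths are illustrative, not the actual minikube helper:

    // carehash.go - sketch: link a CA certificate under its OpenSSL
    // subject-hash name, mirroring the "openssl x509 -hash" + "ln -fs"
    // commands in the log above. Paths are illustrative.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func rehash(certPath, certsDir string) error {
        // Ask openssl for the subject-name hash, exactly as the log does.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("openssl x509 -hash: %w", err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(certsDir, hash+".0")

        // Replace any stale link, then point <hash>.0 at the certificate.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
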
	I0131 02:38:59.857857 1436700 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 02:38:59.861720 1436700 command_runner.go:130] > ca.crt
	I0131 02:38:59.861738 1436700 command_runner.go:130] > ca.key
	I0131 02:38:59.861743 1436700 command_runner.go:130] > healthcheck-client.crt
	I0131 02:38:59.861747 1436700 command_runner.go:130] > healthcheck-client.key
	I0131 02:38:59.861752 1436700 command_runner.go:130] > peer.crt
	I0131 02:38:59.861756 1436700 command_runner.go:130] > peer.key
	I0131 02:38:59.861759 1436700 command_runner.go:130] > server.crt
	I0131 02:38:59.861763 1436700 command_runner.go:130] > server.key
	I0131 02:38:59.861809 1436700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 02:38:59.867048 1436700 command_runner.go:130] > Certificate will not expire
	I0131 02:38:59.867112 1436700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 02:38:59.871936 1436700 command_runner.go:130] > Certificate will not expire
	I0131 02:38:59.872201 1436700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 02:38:59.877004 1436700 command_runner.go:130] > Certificate will not expire
	I0131 02:38:59.877232 1436700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 02:38:59.881986 1436700 command_runner.go:130] > Certificate will not expire
	I0131 02:38:59.882284 1436700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 02:38:59.887443 1436700 command_runner.go:130] > Certificate will not expire
	I0131 02:38:59.887562 1436700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 02:38:59.892892 1436700 command_runner.go:130] > Certificate will not expire
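
The -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours. The same check can be expressed in-process with crypto/x509; a small sketch under that assumption (certificate path illustrative):

    // checkend.go - sketch of "openssl x509 -checkend 86400": report whether
    // a PEM certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
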
	I0131 02:38:59.892990 1436700 kubeadm.go:404] StartCluster: {Name:multinode-263108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-263108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:38:59.893143 1436700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 02:38:59.893196 1436700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 02:38:59.927437 1436700 cri.go:89] found id: ""
	I0131 02:38:59.927546 1436700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 02:38:59.936961 1436700 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0131 02:38:59.936991 1436700 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0131 02:38:59.937003 1436700 command_runner.go:130] > /var/lib/minikube/etcd:
	I0131 02:38:59.937010 1436700 command_runner.go:130] > member
	I0131 02:38:59.937037 1436700 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 02:38:59.937085 1436700 kubeadm.go:636] restartCluster start
	I0131 02:38:59.937159 1436700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 02:38:59.945720 1436700 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:38:59.946282 1436700 kubeconfig.go:92] found "multinode-263108" server: "https://192.168.39.109:8443"
	I0131 02:38:59.946907 1436700 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:38:59.947176 1436700 kapi.go:59] client config for multinode-263108: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:38:59.947799 1436700 cert_rotation.go:137] Starting client certificate rotation controller
	I0131 02:38:59.948161 1436700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 02:38:59.957009 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:38:59.957075 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:38:59.968383 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:00.457940 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:00.458021 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:00.469136 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:00.957655 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:00.957771 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:00.969200 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:01.458143 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:01.458247 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:01.469864 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:01.957361 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:01.957489 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:01.968974 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:02.457498 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:02.457641 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:02.468875 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:02.957413 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:02.957520 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:02.969158 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:03.457780 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:03.457860 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:03.469517 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:03.957056 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:03.957162 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:03.968845 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:04.458044 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:04.458170 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:04.470614 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:04.958101 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:04.958262 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:04.969709 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:05.457232 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:05.457338 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:05.468544 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:05.957292 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:05.957418 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:05.969085 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:06.457908 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:06.458000 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:06.469278 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:06.957929 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:06.958059 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:06.969415 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:07.457961 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:07.458074 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:07.469176 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:07.957841 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:07.957946 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:07.969729 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:08.457238 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:08.457336 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:08.468883 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:08.958031 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:08.958114 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:08.969591 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:09.457744 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:09.457823 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:09.469012 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:09.957828 1436700 api_server.go:166] Checking apiserver status ...
	I0131 02:39:09.957908 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:39:09.969470 1436700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:39:09.969508 1436700 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
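
The run above is the restart probe: it re-runs sudo pgrep -xnf kube-apiserver.*minikube.* roughly every half second and, once the deadline lapses with no match, concludes the apiserver is down and falls through to a full reconfigure. A condensed sketch of that polling loop (the 10s budget and error handling are assumptions for illustration):

    // waitapiserver.go - sketch: look for a running kube-apiserver with pgrep
    // every 500ms until the context deadline expires.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(ctx context.Context) (int, error) {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                var pid int
                fmt.Sscanf(string(out), "%d", &pid)
                return pid, nil
            }
            select {
            case <-ctx.Done():
                return 0, ctx.Err() // e.g. context deadline exceeded
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        pid, err := waitForAPIServer(ctx)
        if err != nil {
            fmt.Println("apiserver error:", err) // leads to the "needs reconfigure" path
            return
        }
        fmt.Println("apiserver pid:", pid)
    }
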
	I0131 02:39:09.969521 1436700 kubeadm.go:1135] stopping kube-system containers ...
	I0131 02:39:09.969535 1436700 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 02:39:09.969615 1436700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 02:39:10.006149 1436700 cri.go:89] found id: ""
	I0131 02:39:10.006259 1436700 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 02:39:10.021308 1436700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 02:39:10.030663 1436700 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0131 02:39:10.030701 1436700 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0131 02:39:10.030713 1436700 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0131 02:39:10.030725 1436700 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 02:39:10.030770 1436700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
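
Before cleaning up stale kubeconfigs, the restart path lists admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf; since none exist on this node, cleanup is skipped and the files are regenerated below. A small sketch of that existence check, done locally here for illustration whereas the log runs it over SSH:

    // confcheck.go - sketch: report which kubeadm-managed kubeconfig files
    // are present, mirroring the "sudo ls -la" probe in the log.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        missing := 0
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Printf("missing: %s (%v)\n", f, err)
                missing++
            }
        }
        if missing > 0 {
            fmt.Println("config check failed, skipping stale config cleanup")
        }
    }
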
	I0131 02:39:10.030825 1436700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 02:39:10.039453 1436700 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 02:39:10.039485 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:39:10.152451 1436700 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 02:39:10.152479 1436700 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0131 02:39:10.152485 1436700 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0131 02:39:10.152492 1436700 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 02:39:10.152500 1436700 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0131 02:39:10.152510 1436700 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0131 02:39:10.152519 1436700 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0131 02:39:10.152527 1436700 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0131 02:39:10.152549 1436700 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0131 02:39:10.152558 1436700 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 02:39:10.152572 1436700 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 02:39:10.152586 1436700 command_runner.go:130] > [certs] Using the existing "sa" key
	I0131 02:39:10.152623 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:39:10.199797 1436700 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 02:39:10.435584 1436700 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 02:39:10.762970 1436700 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 02:39:11.001694 1436700 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 02:39:11.141322 1436700 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 02:39:11.143917 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:39:11.314902 1436700 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 02:39:11.314939 1436700 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 02:39:11.314948 1436700 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0131 02:39:11.314979 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:39:11.418018 1436700 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 02:39:11.418054 1436700 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 02:39:11.418065 1436700 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 02:39:11.418076 1436700 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 02:39:11.418112 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:39:11.479288 1436700 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
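
Rather than a full kubeadm init, the restart rebuilds the control plane piecewise with individual "kubeadm init phase" invocations (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence; the binary path is taken from the log, and PATH handling is simplified:

    // phases.go - sketch: run the kubeadm init phases used by the cluster
    // restart above, in order, against a fixed config file.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
        config := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", config)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }
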
	I0131 02:39:11.483801 1436700 api_server.go:52] waiting for apiserver process to appear ...
	I0131 02:39:11.483904 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:39:11.984785 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:39:12.484219 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:39:12.984215 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:39:13.484371 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:39:13.505398 1436700 command_runner.go:130] > 1089
	I0131 02:39:13.505439 1436700 api_server.go:72] duration metric: took 2.021651035s to wait for apiserver process to appear ...
	I0131 02:39:13.505448 1436700 api_server.go:88] waiting for apiserver healthz status ...
	I0131 02:39:13.505474 1436700 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0131 02:39:13.506037 1436700 api_server.go:269] stopped: https://192.168.39.109:8443/healthz: Get "https://192.168.39.109:8443/healthz": dial tcp 192.168.39.109:8443: connect: connection refused
	I0131 02:39:14.005667 1436700 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0131 02:39:17.295795 1436700 api_server.go:279] https://192.168.39.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 02:39:17.295826 1436700 api_server.go:103] status: https://192.168.39.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 02:39:17.295854 1436700 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0131 02:39:17.360428 1436700 api_server.go:279] https://192.168.39.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 02:39:17.360459 1436700 api_server.go:103] status: https://192.168.39.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 02:39:17.505628 1436700 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0131 02:39:17.510522 1436700 api_server.go:279] https://192.168.39.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 02:39:17.510558 1436700 api_server.go:103] status: https://192.168.39.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 02:39:18.006122 1436700 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0131 02:39:18.010909 1436700 api_server.go:279] https://192.168.39.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 02:39:18.010937 1436700 api_server.go:103] status: https://192.168.39.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 02:39:18.506606 1436700 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0131 02:39:18.516357 1436700 api_server.go:279] https://192.168.39.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 02:39:18.516391 1436700 api_server.go:103] status: https://192.168.39.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 02:39:19.005897 1436700 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0131 02:39:19.011102 1436700 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
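
The health wait above tolerates both the early 403 (the probe connects anonymously, so RBAC rejects it until the bootstrap roles exist) and the 500 with failing poststarthooks, retrying until /healthz finally answers 200 "ok". A stripped-down sketch of such a probe; it sends no client credentials and skips server-certificate verification, which is an assumption made to match the anonymous 403s in the log, and the endpoint and timing are illustrative:

    // healthz.go - sketch: poll the apiserver /healthz endpoint until it
    // reports "ok", treating 403/500 responses as "still starting".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.109:8443/healthz"
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
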
	I0131 02:39:19.011209 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/version
	I0131 02:39:19.011215 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:19.011224 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:19.011230 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:19.020222 1436700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0131 02:39:19.020245 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:19.020252 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:19.020258 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:19.020263 1436700 round_trippers.go:580]     Content-Length: 264
	I0131 02:39:19.020273 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:18 GMT
	I0131 02:39:19.020281 1436700 round_trippers.go:580]     Audit-Id: c65519c2-d8d1-4e75-9e93-c41c9f4e04e4
	I0131 02:39:19.020288 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:19.020297 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:19.020319 1436700 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0131 02:39:19.020432 1436700 api_server.go:141] control plane version: v1.28.4
	I0131 02:39:19.020451 1436700 api_server.go:131] duration metric: took 5.514996396s to wait for apiserver health ...
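
Once /healthz is green, the control-plane version is read back from GET /version, which returns the small JSON document shown above. Decoding it needs nothing beyond encoding/json; a sketch against that response shape, with the body inlined for illustration:

    // version.go - sketch: decode the /version response shown above and
    // print the control-plane version.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        body := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.28.4
    }
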
	I0131 02:39:19.020460 1436700 cni.go:84] Creating CNI manager for ""
	I0131 02:39:19.020465 1436700 cni.go:136] 3 nodes found, recommending kindnet
	I0131 02:39:19.022690 1436700 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0131 02:39:19.024180 1436700 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0131 02:39:19.031140 1436700 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0131 02:39:19.031167 1436700 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0131 02:39:19.031178 1436700 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0131 02:39:19.031189 1436700 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0131 02:39:19.031198 1436700 command_runner.go:130] > Access: 2024-01-31 02:38:46.128809878 +0000
	I0131 02:39:19.031206 1436700 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0131 02:39:19.031215 1436700 command_runner.go:130] > Change: 2024-01-31 02:38:44.179809878 +0000
	I0131 02:39:19.031225 1436700 command_runner.go:130] >  Birth: -
	I0131 02:39:19.031447 1436700 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0131 02:39:19.031463 1436700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0131 02:39:19.053167 1436700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0131 02:39:20.003119 1436700 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0131 02:39:20.008440 1436700 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0131 02:39:20.012365 1436700 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0131 02:39:20.033025 1436700 command_runner.go:130] > daemonset.apps/kindnet configured
	I0131 02:39:20.036570 1436700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 02:39:20.036707 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0131 02:39:20.036716 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.036724 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.036730 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.040810 1436700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0131 02:39:20.040838 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.040846 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.040851 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.040856 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.040863 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.040868 1436700 round_trippers.go:580]     Audit-Id: a0d416b4-6ee9-4054-afea-ca74d35d73ae
	I0131 02:39:20.040875 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.041692 1436700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"838"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82677 chars]
	I0131 02:39:20.045604 1436700 system_pods.go:59] 12 kube-system pods found
	I0131 02:39:20.045635 1436700 system_pods.go:61] "coredns-5dd5756b68-skqw4" [713e1df7-54be-4322-986d-b6d7db88c1c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 02:39:20.045644 1436700 system_pods.go:61] "etcd-multinode-263108" [cf8c4ba5-fce9-4570-a204-0b713281fc21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 02:39:20.045648 1436700 system_pods.go:61] "kindnet-88m7n" [afe9a549-0baf-4f87-8582-7cd758b8192d] Running
	I0131 02:39:20.045654 1436700 system_pods.go:61] "kindnet-knvl8" [8e734b81-4d44-4c96-8439-0ef800021bf8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0131 02:39:20.045660 1436700 system_pods.go:61] "kindnet-zvrh5" [2b89787d-5c3c-48e6-aecc-441c99cd1017] Running
	I0131 02:39:20.045668 1436700 system_pods.go:61] "kube-apiserver-multinode-263108" [0c527200-696b-4681-af91-226016437113] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 02:39:20.045681 1436700 system_pods.go:61] "kube-controller-manager-multinode-263108" [056ea293-6261-4e6c-9b3f-9fdc7d0727a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 02:39:20.045697 1436700 system_pods.go:61] "kube-proxy-mpxjh" [3a11b226-7a8e-4b25-a409-acc439d4bdfb] Running
	I0131 02:39:20.045704 1436700 system_pods.go:61] "kube-proxy-x5jb7" [4dc3dae9-7781-4832-88ba-08a17ecfe557] Running
	I0131 02:39:20.045710 1436700 system_pods.go:61] "kube-proxy-x85lz" [36e014b9-154e-43f4-b694-7f05bd31baef] Running
	I0131 02:39:20.045718 1436700 system_pods.go:61] "kube-scheduler-multinode-263108" [7cc8534f-0f2b-457e-9942-e49d0f507875] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 02:39:20.045724 1436700 system_pods.go:61] "storage-provisioner" [eaba2b6b-2a00-4af9-bdb8-67d110b3eb19] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 02:39:20.045743 1436700 system_pods.go:74] duration metric: took 9.14944ms to wait for pod list to return data ...
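
The kube-system inventory above is a plain pod list against the restarted apiserver; most pods are Running but not yet Ready, which is expected immediately after a restart. A rough equivalent using client-go instead of raw round-trippers looks like this (kubeconfig path illustrative, taken from the log):

    // pods.go - sketch: list kube-system pods via client-go, similar to the
    // "waiting for kube-system pods to appear" step above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18051-1412717/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %s (%s)\n", p.Name, p.Status.Phase)
        }
    }
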
	I0131 02:39:20.045753 1436700 node_conditions.go:102] verifying NodePressure condition ...
	I0131 02:39:20.045831 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes
	I0131 02:39:20.045840 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.045847 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.045854 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.049768 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:20.049803 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.049813 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.049819 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.049824 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.049830 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.049839 1436700 round_trippers.go:580]     Audit-Id: 88dbc722-1524-4fa0-b7d2-2e59e734ef0d
	I0131 02:39:20.049848 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.050450 1436700 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"838"},"items":[{"metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"786","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16354 chars]
	I0131 02:39:20.051251 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:39:20.051298 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:39:20.051309 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:39:20.051313 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:39:20.051317 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:39:20.051320 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:39:20.051324 1436700 node_conditions.go:105] duration metric: took 5.56741ms to run NodePressure ...
	I0131 02:39:20.051345 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:39:20.230972 1436700 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0131 02:39:20.293720 1436700 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0131 02:39:20.295323 1436700 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 02:39:20.295489 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0131 02:39:20.295507 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.295518 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.295530 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.298185 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:20.298210 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.298220 1436700 round_trippers.go:580]     Audit-Id: 14b5c535-c3d4-4ab8-a6b4-1b2da2dcbdfa
	I0131 02:39:20.298235 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.298248 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.298256 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.298267 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.298278 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.298830 1436700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"840"},"items":[{"metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0131 02:39:20.299862 1436700 kubeadm.go:787] kubelet initialised
	I0131 02:39:20.299881 1436700 kubeadm.go:788] duration metric: took 4.537122ms waiting for restarted kubelet to initialise ...
	I0131 02:39:20.299888 1436700 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
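
The "extra waiting" phase then tracks each system-critical pod until its Ready condition turns True (the pod_ready waits that follow). That readiness test reduces to scanning status.conditions; a sketch using the core/v1 types, with a synthetic not-yet-ready pod as input:

    // podready.go - sketch: the readiness check behind the pod_ready waits,
    // reduced to inspecting the pod's Ready condition.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }}}
        fmt.Println("ready:", isPodReady(pod)) // false: still waiting
    }
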
	I0131 02:39:20.299942 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0131 02:39:20.299953 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.299961 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.299969 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.302888 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:20.302907 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.302917 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.302925 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.302932 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.302940 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.302947 1436700 round_trippers.go:580]     Audit-Id: 091908ca-fb7a-4d2c-9d92-c9f01cff3a6a
	I0131 02:39:20.302954 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.304424 1436700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"840"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82677 chars]
	I0131 02:39:20.308000 1436700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:20.308121 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:20.308133 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.308145 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.308156 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.310420 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:20.310441 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.310451 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.310462 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.310472 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.310495 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.310514 1436700 round_trippers.go:580]     Audit-Id: 46eaf837-48af-4982-ae1c-f3340a4201ac
	I0131 02:39:20.310524 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.310712 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:20.311221 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:20.311239 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.311246 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.311252 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.313204 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:20.313225 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.313235 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.313243 1436700 round_trippers.go:580]     Audit-Id: a872455a-f524-4df0-9be9-dd8d821b0ee8
	I0131 02:39:20.313255 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.313263 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.313272 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.313279 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.313544 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"786","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0131 02:39:20.313936 1436700 pod_ready.go:97] node "multinode-263108" hosting pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:20.313956 1436700 pod_ready.go:81] duration metric: took 5.931483ms waiting for pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace to be "Ready" ...
	E0131 02:39:20.313964 1436700 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-263108" hosting pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:20.313979 1436700 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:20.314030 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:20.314037 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.314044 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.314050 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.316014 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:20.316045 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.316055 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.316065 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.316073 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.316088 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.316097 1436700 round_trippers.go:580]     Audit-Id: 99f7ead2-1429-41dc-aef4-1ca77442b268
	I0131 02:39:20.316112 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.316238 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:20.316648 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:20.316661 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.316668 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.316676 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.318411 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:20.318429 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.318439 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.318447 1436700 round_trippers.go:580]     Audit-Id: d29eb8be-20f2-4b81-9ba3-2031493f9480
	I0131 02:39:20.318455 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.318462 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.318472 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.318490 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.318641 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"786","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0131 02:39:20.319011 1436700 pod_ready.go:97] node "multinode-263108" hosting pod "etcd-multinode-263108" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:20.319030 1436700 pod_ready.go:81] duration metric: took 5.045838ms waiting for pod "etcd-multinode-263108" in "kube-system" namespace to be "Ready" ...
	E0131 02:39:20.319038 1436700 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-263108" hosting pod "etcd-multinode-263108" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:20.319061 1436700 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:20.319110 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-263108
	I0131 02:39:20.319116 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.319123 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.319131 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.320803 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:20.320823 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.320833 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.320841 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.320853 1436700 round_trippers.go:580]     Audit-Id: c791a8fa-a7cf-43cd-ab9c-aedd4f28b14a
	I0131 02:39:20.320860 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.320870 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.320881 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.321078 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-263108","namespace":"kube-system","uid":"0c527200-696b-4681-af91-226016437113","resourceVersion":"788","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.109:8443","kubernetes.io/config.hash":"d670ff05d0032fcc9ae24f8fc09df250","kubernetes.io/config.mirror":"d670ff05d0032fcc9ae24f8fc09df250","kubernetes.io/config.seen":"2024-01-31T02:28:18.078204875Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0131 02:39:20.321522 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:20.321538 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.321545 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.321550 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.323172 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:20.323190 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.323196 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.323202 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.323207 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.323212 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.323217 1436700 round_trippers.go:580]     Audit-Id: 9ec9c9c9-e0c6-4244-8511-934a2254b9f1
	I0131 02:39:20.323223 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.323426 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"786","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0131 02:39:20.323741 1436700 pod_ready.go:97] node "multinode-263108" hosting pod "kube-apiserver-multinode-263108" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:20.323760 1436700 pod_ready.go:81] duration metric: took 4.691161ms waiting for pod "kube-apiserver-multinode-263108" in "kube-system" namespace to be "Ready" ...
	E0131 02:39:20.323768 1436700 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-263108" hosting pod "kube-apiserver-multinode-263108" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:20.323774 1436700 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:20.323828 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-263108
	I0131 02:39:20.323836 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.323843 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.323848 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.325475 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:20.325497 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.325503 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.325515 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.325531 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.325539 1436700 round_trippers.go:580]     Audit-Id: f045c8f3-fde1-4869-8717-3edc5d468fe8
	I0131 02:39:20.325547 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.325557 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.325783 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-263108","namespace":"kube-system","uid":"056ea293-6261-4e6c-9b3f-9fdc7d0727a2","resourceVersion":"787","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19e16e470f3c55d41e223486b2026f1d","kubernetes.io/config.mirror":"19e16e470f3c55d41e223486b2026f1d","kubernetes.io/config.seen":"2024-01-31T02:28:18.078205997Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0131 02:39:20.437534 1436700 request.go:629] Waited for 111.254084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:20.437656 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:20.437665 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.437679 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.437690 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.441032 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:20.441060 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.441072 1436700 round_trippers.go:580]     Audit-Id: 3c274853-90a0-4bdb-876f-2d05c93b17a2
	I0131 02:39:20.441081 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.441089 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.441096 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.441104 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.441113 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.441384 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"786","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0131 02:39:20.441764 1436700 pod_ready.go:97] node "multinode-263108" hosting pod "kube-controller-manager-multinode-263108" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:20.441787 1436700 pod_ready.go:81] duration metric: took 118.004095ms waiting for pod "kube-controller-manager-multinode-263108" in "kube-system" namespace to be "Ready" ...
	E0131 02:39:20.441799 1436700 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-263108" hosting pod "kube-controller-manager-multinode-263108" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:20.441809 1436700 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mpxjh" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:20.637360 1436700 request.go:629] Waited for 195.415305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:39:20.637435 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:39:20.637447 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.637459 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.637469 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.641004 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:20.641024 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.641031 1436700 round_trippers.go:580]     Audit-Id: 73dad9c7-5db0-47a7-b15f-540ef29a823a
	I0131 02:39:20.641037 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.641042 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.641046 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.641051 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.641056 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.641294 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mpxjh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3a11b226-7a8e-4b25-a409-acc439d4bdfb","resourceVersion":"759","creationTimestamp":"2024-01-31T02:30:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0131 02:39:20.837201 1436700 request.go:629] Waited for 195.402162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:39:20.837288 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:39:20.837296 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:20.837304 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:20.837310 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:20.841553 1436700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0131 02:39:20.841586 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:20.841598 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:20.841607 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:20.841620 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:20.841628 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:20.841637 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:20 GMT
	I0131 02:39:20.841648 1436700 round_trippers.go:580]     Audit-Id: 25f4b96b-92d2-46b1-9013-704e5ec19d24
	I0131 02:39:20.841773 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m03","uid":"5d8d8dfa-72be-4459-b7bc-217aef0cc608","resourceVersion":"785","creationTimestamp":"2024-01-31T02:31:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_31_25_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:31:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3964 chars]
	I0131 02:39:20.842190 1436700 pod_ready.go:92] pod "kube-proxy-mpxjh" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:20.842222 1436700 pod_ready.go:81] duration metric: took 400.40423ms waiting for pod "kube-proxy-mpxjh" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:20.842236 1436700 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x5jb7" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:21.037337 1436700 request.go:629] Waited for 194.996937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x5jb7
	I0131 02:39:21.037435 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x5jb7
	I0131 02:39:21.037447 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:21.037459 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:21.037473 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:21.041186 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:21.041212 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:21.041223 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:21.041232 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:21.041241 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:21.041262 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:21 GMT
	I0131 02:39:21.041277 1436700 round_trippers.go:580]     Audit-Id: 98b1838d-60b5-49ef-bc32-9f6b12f6478a
	I0131 02:39:21.041285 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:21.041535 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x5jb7","generateName":"kube-proxy-","namespace":"kube-system","uid":"4dc3dae9-7781-4832-88ba-08a17ecfe557","resourceVersion":"554","creationTimestamp":"2024-01-31T02:29:54Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0131 02:39:21.237774 1436700 request.go:629] Waited for 195.572265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:39:21.237870 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:39:21.237884 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:21.237896 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:21.237908 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:21.240257 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:21.240282 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:21.240291 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:21.240299 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:21.240306 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:21.240314 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:21.240323 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:21 GMT
	I0131 02:39:21.240332 1436700 round_trippers.go:580]     Audit-Id: 36128ce8-bfc9-4521-9368-d97164fdfbbf
	I0131 02:39:21.240495 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m02","uid":"33ce8eca-eb98-4b22-953c-97e57c604ffc","resourceVersion":"782","creationTimestamp":"2024-01-31T02:29:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_31_25_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0131 02:39:21.240919 1436700 pod_ready.go:92] pod "kube-proxy-x5jb7" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:21.240945 1436700 pod_ready.go:81] duration metric: took 398.701371ms waiting for pod "kube-proxy-x5jb7" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:21.240957 1436700 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x85lz" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:21.436802 1436700 request.go:629] Waited for 195.747355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x85lz
	I0131 02:39:21.436896 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x85lz
	I0131 02:39:21.436908 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:21.436919 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:21.436929 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:21.439338 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:21.439359 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:21.439373 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:21.439381 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:21.439389 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:21.439396 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:21.439405 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:21 GMT
	I0131 02:39:21.439417 1436700 round_trippers.go:580]     Audit-Id: f9677f26-9c01-41c4-bab7-0b03d14145ed
	I0131 02:39:21.439522 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x85lz","generateName":"kube-proxy-","namespace":"kube-system","uid":"36e014b9-154e-43f4-b694-7f05bd31baef","resourceVersion":"837","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0131 02:39:21.637330 1436700 request.go:629] Waited for 197.361199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:21.637417 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:21.637428 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:21.637442 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:21.637456 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:21.639766 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:21.639788 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:21.639805 1436700 round_trippers.go:580]     Audit-Id: a1b0da8f-3642-402a-9874-e5732bd37aab
	I0131 02:39:21.639811 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:21.639816 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:21.639822 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:21.639827 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:21.639836 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:21 GMT
	I0131 02:39:21.640135 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"786","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0131 02:39:21.640454 1436700 pod_ready.go:97] node "multinode-263108" hosting pod "kube-proxy-x85lz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:21.640475 1436700 pod_ready.go:81] duration metric: took 399.511532ms waiting for pod "kube-proxy-x85lz" in "kube-system" namespace to be "Ready" ...
	E0131 02:39:21.640491 1436700 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-263108" hosting pod "kube-proxy-x85lz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:21.640497 1436700 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:21.837645 1436700 request.go:629] Waited for 197.025795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-263108
	I0131 02:39:21.837740 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-263108
	I0131 02:39:21.837752 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:21.837764 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:21.837776 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:21.840014 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:21.840040 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:21.840056 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:21.840068 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:21.840080 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:21 GMT
	I0131 02:39:21.840091 1436700 round_trippers.go:580]     Audit-Id: cb6dc36e-ca90-4e73-ad42-5165e5696a0a
	I0131 02:39:21.840102 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:21.840112 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:21.840292 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-263108","namespace":"kube-system","uid":"7cc8534f-0f2b-457e-9942-e49d0f507875","resourceVersion":"795","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7320cc932f9ec0e3160c3b0ecdf22c62","kubernetes.io/config.mirror":"7320cc932f9ec0e3160c3b0ecdf22c62","kubernetes.io/config.seen":"2024-01-31T02:28:18.078207038Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0131 02:39:22.037070 1436700 request.go:629] Waited for 196.362246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:22.037175 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:22.037188 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:22.037204 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:22.037215 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:22.039509 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:22.039529 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:22.039536 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:22 GMT
	I0131 02:39:22.039541 1436700 round_trippers.go:580]     Audit-Id: 40f8e366-1b91-477e-a392-7c2c2588b3c6
	I0131 02:39:22.039547 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:22.039552 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:22.039557 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:22.039562 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:22.039806 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"786","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0131 02:39:22.040242 1436700 pod_ready.go:97] node "multinode-263108" hosting pod "kube-scheduler-multinode-263108" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:22.040267 1436700 pod_ready.go:81] duration metric: took 399.760995ms waiting for pod "kube-scheduler-multinode-263108" in "kube-system" namespace to be "Ready" ...
	E0131 02:39:22.040277 1436700 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-263108" hosting pod "kube-scheduler-multinode-263108" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-263108" has status "Ready":"False"
	I0131 02:39:22.040296 1436700 pod_ready.go:38] duration metric: took 1.740398209s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:39:22.040315 1436700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 02:39:22.052087 1436700 command_runner.go:130] > -16
	I0131 02:39:22.052278 1436700 ops.go:34] apiserver oom_adj: -16
	I0131 02:39:22.052300 1436700 kubeadm.go:640] restartCluster took 22.11520408s
	I0131 02:39:22.052311 1436700 kubeadm.go:406] StartCluster complete in 22.159329798s
	I0131 02:39:22.052348 1436700 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:39:22.052445 1436700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:39:22.053220 1436700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:39:22.053468 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 02:39:22.053622 1436700 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 02:39:22.055678 1436700 out.go:177] * Enabled addons: 
	I0131 02:39:22.053821 1436700 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:39:22.053865 1436700 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:39:22.056934 1436700 addons.go:505] enable addons completed in 3.330987ms: enabled=[]
	I0131 02:39:22.057228 1436700 kapi.go:59] client config for multinode-263108: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:39:22.057599 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0131 02:39:22.057612 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:22.057619 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:22.057625 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:22.059853 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:22.059873 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:22.059884 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:22.059904 1436700 round_trippers.go:580]     Content-Length: 291
	I0131 02:39:22.059915 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:22 GMT
	I0131 02:39:22.059927 1436700 round_trippers.go:580]     Audit-Id: afbbebb6-b828-4edf-bf57-5c1f716099ee
	I0131 02:39:22.059936 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:22.059946 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:22.059953 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:22.060013 1436700 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2554d8bc-c0ad-485d-a9be-18a695e4434b","resourceVersion":"839","creationTimestamp":"2024-01-31T02:28:17Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0131 02:39:22.060224 1436700 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-263108" context rescaled to 1 replicas
	I0131 02:39:22.060269 1436700 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 02:39:22.061814 1436700 out.go:177] * Verifying Kubernetes components...
	I0131 02:39:22.063164 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:39:22.150671 1436700 command_runner.go:130] > apiVersion: v1
	I0131 02:39:22.150697 1436700 command_runner.go:130] > data:
	I0131 02:39:22.150702 1436700 command_runner.go:130] >   Corefile: |
	I0131 02:39:22.150706 1436700 command_runner.go:130] >     .:53 {
	I0131 02:39:22.150709 1436700 command_runner.go:130] >         log
	I0131 02:39:22.150715 1436700 command_runner.go:130] >         errors
	I0131 02:39:22.150719 1436700 command_runner.go:130] >         health {
	I0131 02:39:22.150724 1436700 command_runner.go:130] >            lameduck 5s
	I0131 02:39:22.150727 1436700 command_runner.go:130] >         }
	I0131 02:39:22.150732 1436700 command_runner.go:130] >         ready
	I0131 02:39:22.150738 1436700 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0131 02:39:22.150742 1436700 command_runner.go:130] >            pods insecure
	I0131 02:39:22.150748 1436700 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0131 02:39:22.150752 1436700 command_runner.go:130] >            ttl 30
	I0131 02:39:22.150756 1436700 command_runner.go:130] >         }
	I0131 02:39:22.150760 1436700 command_runner.go:130] >         prometheus :9153
	I0131 02:39:22.150776 1436700 command_runner.go:130] >         hosts {
	I0131 02:39:22.150785 1436700 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0131 02:39:22.150789 1436700 command_runner.go:130] >            fallthrough
	I0131 02:39:22.150792 1436700 command_runner.go:130] >         }
	I0131 02:39:22.150797 1436700 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0131 02:39:22.150804 1436700 command_runner.go:130] >            max_concurrent 1000
	I0131 02:39:22.150808 1436700 command_runner.go:130] >         }
	I0131 02:39:22.150812 1436700 command_runner.go:130] >         cache 30
	I0131 02:39:22.150827 1436700 command_runner.go:130] >         loop
	I0131 02:39:22.150834 1436700 command_runner.go:130] >         reload
	I0131 02:39:22.150838 1436700 command_runner.go:130] >         loadbalance
	I0131 02:39:22.150844 1436700 command_runner.go:130] >     }
	I0131 02:39:22.150848 1436700 command_runner.go:130] > kind: ConfigMap
	I0131 02:39:22.150854 1436700 command_runner.go:130] > metadata:
	I0131 02:39:22.150859 1436700 command_runner.go:130] >   creationTimestamp: "2024-01-31T02:28:17Z"
	I0131 02:39:22.150866 1436700 command_runner.go:130] >   name: coredns
	I0131 02:39:22.150870 1436700 command_runner.go:130] >   namespace: kube-system
	I0131 02:39:22.150876 1436700 command_runner.go:130] >   resourceVersion: "401"
	I0131 02:39:22.150891 1436700 command_runner.go:130] >   uid: daf4a056-d58e-4e33-ae1c-801c3e65300a
	I0131 02:39:22.153181 1436700 node_ready.go:35] waiting up to 6m0s for node "multinode-263108" to be "Ready" ...
	I0131 02:39:22.153207 1436700 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0131 02:39:22.237582 1436700 request.go:629] Waited for 84.256481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:22.237662 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:22.237671 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:22.237682 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:22.237693 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:22.240583 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:22.240610 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:22.240620 1436700 round_trippers.go:580]     Audit-Id: fa9e307a-c4c6-4c77-aaa7-1d89569d7eef
	I0131 02:39:22.240629 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:22.240636 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:22.240645 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:22.240665 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:22.240674 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:22 GMT
	I0131 02:39:22.240836 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"786","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0131 02:39:22.654405 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:22.654431 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:22.654442 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:22.654448 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:22.657799 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:22.657825 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:22.657832 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:22.657838 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:22 GMT
	I0131 02:39:22.657843 1436700 round_trippers.go:580]     Audit-Id: 56f06f2d-68cb-4700-94af-148f63543fd8
	I0131 02:39:22.657848 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:22.657853 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:22.657858 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:22.658114 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:22.658448 1436700 node_ready.go:49] node "multinode-263108" has status "Ready":"True"
	I0131 02:39:22.658463 1436700 node_ready.go:38] duration metric: took 505.252385ms waiting for node "multinode-263108" to be "Ready" ...
	I0131 02:39:22.658473 1436700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:39:22.658553 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0131 02:39:22.658562 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:22.658569 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:22.658575 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:22.661988 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:22.662012 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:22.662021 1436700 round_trippers.go:580]     Audit-Id: e9a6e16a-31b3-41bb-92a4-e783e23a2237
	I0131 02:39:22.662029 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:22.662037 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:22.662045 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:22.662053 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:22.662061 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:22 GMT
	I0131 02:39:22.663209 1436700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"907"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82996 chars]
	I0131 02:39:22.667103 1436700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:22.837576 1436700 request.go:629] Waited for 170.344396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:22.837650 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:22.837655 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:22.837663 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:22.837669 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:22.840547 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:22.840574 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:22.840585 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:22.840594 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:22 GMT
	I0131 02:39:22.840601 1436700 round_trippers.go:580]     Audit-Id: 449f59d0-3059-4957-bc44-e22d1b6a77b0
	I0131 02:39:22.840607 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:22.840612 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:22.840617 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:22.840746 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:23.037697 1436700 request.go:629] Waited for 196.36454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:23.037782 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:23.037788 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:23.037796 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:23.037803 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:23.040300 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:23.040321 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:23.040328 1436700 round_trippers.go:580]     Audit-Id: 5a5c2572-06e3-487c-ad7c-d8e3fa32d917
	I0131 02:39:23.040333 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:23.040338 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:23.040346 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:23.040355 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:23.040364 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:23 GMT
	I0131 02:39:23.040557 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:23.237140 1436700 request.go:629] Waited for 69.262596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:23.237251 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:23.237263 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:23.237275 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:23.237298 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:23.239820 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:23.239841 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:23.239853 1436700 round_trippers.go:580]     Audit-Id: 859d7339-f8a7-4a8a-a969-bbd9fa178eeb
	I0131 02:39:23.239859 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:23.239864 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:23.239869 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:23.239874 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:23.239879 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:23 GMT
	I0131 02:39:23.240035 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:23.436908 1436700 request.go:629] Waited for 196.298302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:23.436974 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:23.436979 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:23.436988 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:23.436994 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:23.439777 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:23.439802 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:23.439814 1436700 round_trippers.go:580]     Audit-Id: 090ca822-8eb9-4e73-bf9c-235a11ce901d
	I0131 02:39:23.439824 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:23.439835 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:23.439840 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:23.439846 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:23.439851 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:23 GMT
	I0131 02:39:23.440231 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:23.667760 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:23.667794 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:23.667807 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:23.667817 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:23.671094 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:23.671120 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:23.671130 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:23.671140 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:23.671149 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:23 GMT
	I0131 02:39:23.671155 1436700 round_trippers.go:580]     Audit-Id: 7008ce33-1ca5-4226-95a4-0419b7ab752f
	I0131 02:39:23.671160 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:23.671165 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:23.671525 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:23.837486 1436700 request.go:629] Waited for 165.35292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:23.837568 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:23.837576 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:23.837589 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:23.837603 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:23.840288 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:23.840313 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:23.840324 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:23.840333 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:23 GMT
	I0131 02:39:23.840342 1436700 round_trippers.go:580]     Audit-Id: 7ef039ae-8353-4564-8487-c7a2d58a48d1
	I0131 02:39:23.840349 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:23.840354 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:23.840360 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:23.840568 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:24.167676 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:24.167703 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:24.167712 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:24.167718 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:24.170521 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:24.170545 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:24.170553 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:24.170558 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:24.170563 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:24.170569 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:24 GMT
	I0131 02:39:24.170574 1436700 round_trippers.go:580]     Audit-Id: ec739100-7e46-4e11-ba5f-d3583161a7c5
	I0131 02:39:24.170579 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:24.170902 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:24.237723 1436700 request.go:629] Waited for 66.225664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:24.237784 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:24.237807 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:24.237818 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:24.237824 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:24.240387 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:24.240413 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:24.240423 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:24.240432 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:24 GMT
	I0131 02:39:24.240440 1436700 round_trippers.go:580]     Audit-Id: 771b848a-329b-4d1a-bf24-fd1288401d72
	I0131 02:39:24.240449 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:24.240456 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:24.240464 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:24.240649 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:24.668437 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:24.668466 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:24.668475 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:24.668481 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:24.671750 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:24.671779 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:24.671802 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:24 GMT
	I0131 02:39:24.671810 1436700 round_trippers.go:580]     Audit-Id: 55e53d8c-3a59-4812-9a2b-c9ce93a73ca9
	I0131 02:39:24.671815 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:24.671820 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:24.671825 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:24.671830 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:24.672118 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:24.672696 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:24.672718 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:24.672735 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:24.672742 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:24.675216 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:24.675232 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:24.675240 1436700 round_trippers.go:580]     Audit-Id: f03212d1-a066-429a-aecc-2f109d6be3f7
	I0131 02:39:24.675248 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:24.675256 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:24.675274 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:24.675284 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:24.675296 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:24 GMT
	I0131 02:39:24.675546 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:24.675884 1436700 pod_ready.go:102] pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace has status "Ready":"False"
	I0131 02:39:25.168304 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:25.168342 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:25.168356 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:25.168364 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:25.171263 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:25.171285 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:25.171292 1436700 round_trippers.go:580]     Audit-Id: 6d7a2e95-085e-4eb4-b1eb-a4651b39df80
	I0131 02:39:25.171298 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:25.171303 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:25.171308 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:25.171315 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:25.171323 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:25 GMT
	I0131 02:39:25.171563 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:25.172062 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:25.172077 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:25.172085 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:25.172090 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:25.174092 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:25.174111 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:25.174118 1436700 round_trippers.go:580]     Audit-Id: 1efa0305-67ce-497b-b08d-0322444f01a6
	I0131 02:39:25.174124 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:25.174131 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:25.174136 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:25.174141 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:25.174147 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:25 GMT
	I0131 02:39:25.174267 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:25.668003 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:25.668029 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:25.668038 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:25.668044 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:25.673927 1436700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0131 02:39:25.673950 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:25.673960 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:25.673969 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:25 GMT
	I0131 02:39:25.673976 1436700 round_trippers.go:580]     Audit-Id: 2cc23e78-494e-4c4b-8f60-7dc86c86f2a5
	I0131 02:39:25.673986 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:25.674003 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:25.674012 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:25.674445 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:25.674932 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:25.674947 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:25.674958 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:25.674964 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:25.677821 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:25.677837 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:25.677846 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:25.677854 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:25.677862 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:25 GMT
	I0131 02:39:25.677875 1436700 round_trippers.go:580]     Audit-Id: 7e92f511-a775-4524-9b6f-cf7400ea9498
	I0131 02:39:25.677886 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:25.677898 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:25.678085 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:26.168092 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:26.168128 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:26.168140 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:26.168154 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:26.173577 1436700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0131 02:39:26.173600 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:26.173607 1436700 round_trippers.go:580]     Audit-Id: fb92a0b2-bb2f-491f-8e41-18e9473a7982
	I0131 02:39:26.173613 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:26.173618 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:26.173623 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:26.173628 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:26.173633 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:26 GMT
	I0131 02:39:26.173825 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:26.174289 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:26.174301 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:26.174309 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:26.174315 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:26.178623 1436700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0131 02:39:26.178657 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:26.178666 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:26 GMT
	I0131 02:39:26.178674 1436700 round_trippers.go:580]     Audit-Id: 161ba32d-2d84-4274-bb7a-14ee0f293111
	I0131 02:39:26.178682 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:26.178690 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:26.178699 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:26.178708 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:26.179056 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:26.667722 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:26.667751 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:26.667760 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:26.667766 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:26.673336 1436700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0131 02:39:26.673367 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:26.673378 1436700 round_trippers.go:580]     Audit-Id: 749c7e68-d65b-4bd4-aeff-00ccdfee48dc
	I0131 02:39:26.673387 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:26.673395 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:26.673407 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:26.673415 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:26.673422 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:26 GMT
	I0131 02:39:26.673824 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"793","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0131 02:39:26.674323 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:26.674339 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:26.674346 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:26.674352 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:26.677137 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:26.677162 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:26.677173 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:26.677181 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:26 GMT
	I0131 02:39:26.677190 1436700 round_trippers.go:580]     Audit-Id: 87030a1d-a052-4af3-905b-b071604bc2c0
	I0131 02:39:26.677198 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:26.677206 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:26.677215 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:26.677408 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:26.677714 1436700 pod_ready.go:102] pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace has status "Ready":"False"
	I0131 02:39:27.167773 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:39:27.167798 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:27.167806 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:27.167812 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:27.170328 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:27.170348 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:27.170355 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:27.170360 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:27.170366 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:27.170371 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:27.170375 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:27 GMT
	I0131 02:39:27.170380 1436700 round_trippers.go:580]     Audit-Id: 8e715f13-b150-4491-8862-12a4ddc70180
	I0131 02:39:27.170564 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"918","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0131 02:39:27.171085 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:27.171103 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:27.171110 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:27.171116 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:27.173349 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:27.173372 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:27.173381 1436700 round_trippers.go:580]     Audit-Id: 29276a8a-d399-4ef9-9cd7-1b11110f82a6
	I0131 02:39:27.173390 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:27.173398 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:27.173406 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:27.173413 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:27.173423 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:27 GMT
	I0131 02:39:27.173956 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:27.174296 1436700 pod_ready.go:92] pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:27.174314 1436700 pod_ready.go:81] duration metric: took 4.507185469s waiting for pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:27.174326 1436700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:27.174378 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:27.174386 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:27.174393 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:27.174398 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:27.176609 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:27.176625 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:27.176631 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:27.176636 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:27.176641 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:27.176646 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:27.176651 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:27 GMT
	I0131 02:39:27.176656 1436700 round_trippers.go:580]     Audit-Id: 162247ae-1dec-474e-9503-935bf4f841b9
	I0131 02:39:27.176841 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:27.177328 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:27.177349 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:27.177366 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:27.177375 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:27.179209 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:27.179223 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:27.179233 1436700 round_trippers.go:580]     Audit-Id: dfbe891e-4265-4b21-a441-7720b3fcd69c
	I0131 02:39:27.179238 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:27.179243 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:27.179248 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:27.179253 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:27.179258 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:27 GMT
	I0131 02:39:27.179426 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:27.674926 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:27.674951 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:27.674960 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:27.674966 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:27.686994 1436700 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0131 02:39:27.687030 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:27.687041 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:27.687050 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:27.687058 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:27.687066 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:27.687074 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:27 GMT
	I0131 02:39:27.687087 1436700 round_trippers.go:580]     Audit-Id: f66a3f74-29b9-46cf-9e73-5d216e33c11d
	I0131 02:39:27.687268 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:27.687792 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:27.687808 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:27.687816 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:27.687822 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:27.690736 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:27.690762 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:27.690772 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:27.690780 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:27 GMT
	I0131 02:39:27.690786 1436700 round_trippers.go:580]     Audit-Id: de635020-d87e-446d-9d50-73b8769d9ab0
	I0131 02:39:27.690791 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:27.690796 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:27.690801 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:27.690942 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:28.174610 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:28.174651 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:28.174663 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:28.174684 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:28.177337 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:28.177357 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:28.177368 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:28.177376 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:28 GMT
	I0131 02:39:28.177385 1436700 round_trippers.go:580]     Audit-Id: 8e8de55d-9765-474b-b973-bd24d971dae8
	I0131 02:39:28.177396 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:28.177408 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:28.177420 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:28.177623 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:28.178065 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:28.178079 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:28.178087 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:28.178093 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:28.180816 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:28.180843 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:28.180853 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:28.180862 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:28 GMT
	I0131 02:39:28.180878 1436700 round_trippers.go:580]     Audit-Id: e0e0800e-4f95-4851-8827-df36418e9f46
	I0131 02:39:28.180887 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:28.180895 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:28.180903 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:28.181355 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:28.675006 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:28.675033 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:28.675041 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:28.675047 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:28.678871 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:28.678895 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:28.678903 1436700 round_trippers.go:580]     Audit-Id: 474235d3-ab64-4ddb-bdcc-18c61d6325b0
	I0131 02:39:28.678909 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:28.678914 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:28.678918 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:28.678923 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:28.678928 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:28 GMT
	I0131 02:39:28.679891 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:28.680497 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:28.680519 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:28.680531 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:28.680544 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:28.686838 1436700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0131 02:39:28.686853 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:28.686859 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:28.686864 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:28 GMT
	I0131 02:39:28.686869 1436700 round_trippers.go:580]     Audit-Id: 089858c9-bd36-4170-9391-e1076347fcbc
	I0131 02:39:28.686874 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:28.686879 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:28.686884 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:28.687104 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:29.175267 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:29.175299 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:29.175311 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:29.175321 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:29.178144 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:29.178164 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:29.178172 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:29.178177 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:29 GMT
	I0131 02:39:29.178185 1436700 round_trippers.go:580]     Audit-Id: c2ec4adf-cae6-4d5f-871d-20faccd983c6
	I0131 02:39:29.178191 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:29.178196 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:29.178205 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:29.178466 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:29.178961 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:29.178977 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:29.178985 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:29.178991 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:29.181017 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:29.181033 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:29.181039 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:29.181044 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:29.181052 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:29 GMT
	I0131 02:39:29.181058 1436700 round_trippers.go:580]     Audit-Id: 2fcab8c9-a1f9-423c-baf4-43826c54983f
	I0131 02:39:29.181063 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:29.181071 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:29.181339 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:29.181761 1436700 pod_ready.go:102] pod "etcd-multinode-263108" in "kube-system" namespace has status "Ready":"False"
	I0131 02:39:29.674989 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:29.675013 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:29.675034 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:29.675040 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:29.677953 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:29.677980 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:29.677990 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:29.677998 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:29.678006 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:29.678018 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:29.678026 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:29 GMT
	I0131 02:39:29.678035 1436700 round_trippers.go:580]     Audit-Id: afff8e50-fe85-4917-bb51-22c09373bab3
	I0131 02:39:29.678257 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:29.678847 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:29.678864 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:29.678872 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:29.678878 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:29.681768 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:29.681785 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:29.681791 1436700 round_trippers.go:580]     Audit-Id: 90397d4e-7a94-4ba6-9656-1d4481ad6c6a
	I0131 02:39:29.681804 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:29.681812 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:29.681821 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:29.681828 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:29.681836 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:29 GMT
	I0131 02:39:29.682031 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:30.174786 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:30.174814 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:30.174822 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:30.174828 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:30.177810 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:30.177841 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:30.177852 1436700 round_trippers.go:580]     Audit-Id: 66294d38-c557-4239-8a93-fadc782a22d4
	I0131 02:39:30.177861 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:30.177869 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:30.177880 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:30.177891 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:30.177902 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:30 GMT
	I0131 02:39:30.178139 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:30.178718 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:30.178735 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:30.178744 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:30.178760 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:30.181065 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:30.181086 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:30.181095 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:30.181104 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:30.181111 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:30.181118 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:30.181126 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:30 GMT
	I0131 02:39:30.181137 1436700 round_trippers.go:580]     Audit-Id: 688767ab-117a-4f10-b5d4-0ffff533312d
	I0131 02:39:30.181327 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:30.674674 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:30.674723 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:30.674731 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:30.674737 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:30.678253 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:30.678279 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:30.678289 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:30.678301 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:30 GMT
	I0131 02:39:30.678309 1436700 round_trippers.go:580]     Audit-Id: e108f7d9-0cc6-4c9f-9751-4c128a9fee8c
	I0131 02:39:30.678316 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:30.678328 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:30.678336 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:30.678547 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:30.678961 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:30.678973 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:30.678980 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:30.678986 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:30.681675 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:30.681713 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:30.681727 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:30.681736 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:30 GMT
	I0131 02:39:30.681752 1436700 round_trippers.go:580]     Audit-Id: 900d50c2-3539-4f74-b8e4-7c580b4d0d26
	I0131 02:39:30.681759 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:30.681767 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:30.681777 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:30.682211 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:31.175124 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:31.175157 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:31.175183 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:31.175193 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:31.178641 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:31.178672 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:31.178683 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:31.178692 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:31.178701 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:31.178710 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:31 GMT
	I0131 02:39:31.178720 1436700 round_trippers.go:580]     Audit-Id: 30f85ca6-fca9-4d4e-ac57-56e223335d57
	I0131 02:39:31.178729 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:31.179339 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:31.179781 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:31.179795 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:31.179803 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:31.179809 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:31.182119 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:31.182140 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:31.182149 1436700 round_trippers.go:580]     Audit-Id: 30ac4c03-ef5c-4592-a5cb-170c46940ecb
	I0131 02:39:31.182177 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:31.182188 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:31.182200 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:31.182209 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:31.182220 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:31 GMT
	I0131 02:39:31.182408 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:31.182849 1436700 pod_ready.go:102] pod "etcd-multinode-263108" in "kube-system" namespace has status "Ready":"False"
	I0131 02:39:31.674844 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:31.674870 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:31.674879 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:31.674885 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:31.677613 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:31.677635 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:31.677642 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:31.677648 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:31 GMT
	I0131 02:39:31.677653 1436700 round_trippers.go:580]     Audit-Id: 3e245294-5b9b-4d73-9c03-b99d66259c68
	I0131 02:39:31.677658 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:31.677669 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:31.677686 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:31.677896 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:31.678310 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:31.678323 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:31.678330 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:31.678336 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:31.680681 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:31.680699 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:31.680705 1436700 round_trippers.go:580]     Audit-Id: 1a005453-c40e-42e2-ae69-6d4baeff5019
	I0131 02:39:31.680710 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:31.680715 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:31.680720 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:31.680725 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:31.680732 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:31 GMT
	I0131 02:39:31.680895 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:32.174590 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:32.174618 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:32.174627 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:32.174637 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:32.177329 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:32.177349 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:32.177356 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:32 GMT
	I0131 02:39:32.177361 1436700 round_trippers.go:580]     Audit-Id: 5bbde8da-4c84-44e4-8712-5bcf545fc165
	I0131 02:39:32.177372 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:32.177380 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:32.177387 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:32.177401 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:32.177606 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:32.178018 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:32.178036 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:32.178044 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:32.178053 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:32.180290 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:32.180315 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:32.180324 1436700 round_trippers.go:580]     Audit-Id: 52a1171d-2b10-461d-867e-8127269e80eb
	I0131 02:39:32.180332 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:32.180339 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:32.180347 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:32.180355 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:32.180364 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:32 GMT
	I0131 02:39:32.180539 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:32.675234 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:32.675261 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:32.675269 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:32.675275 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:32.678355 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:32.678381 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:32.678391 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:32.678399 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:32.678407 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:32.678413 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:32 GMT
	I0131 02:39:32.678420 1436700 round_trippers.go:580]     Audit-Id: f0871243-a0f4-45b5-8412-83a0e9d85e43
	I0131 02:39:32.678427 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:32.678885 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:32.679428 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:32.679444 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:32.679454 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:32.679465 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:32.681974 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:32.681991 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:32.681998 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:32.682003 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:32 GMT
	I0131 02:39:32.682008 1436700 round_trippers.go:580]     Audit-Id: adfcdb41-c455-4e6e-95f0-6068a55d9d97
	I0131 02:39:32.682013 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:32.682018 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:32.682023 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:32.682190 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:33.174686 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:33.174721 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.174734 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.174743 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.177428 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:33.177446 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.177453 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.177458 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.177466 1436700 round_trippers.go:580]     Audit-Id: 231f827d-65e9-4b00-82bd-44e80e675645
	I0131 02:39:33.177471 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.177476 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.177484 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.177644 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"791","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0131 02:39:33.178165 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:33.178182 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.178193 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.178212 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.180919 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:33.180938 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.180946 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.180951 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.180965 1436700 round_trippers.go:580]     Audit-Id: c17bd184-841b-4531-a9e4-c6a04b510690
	I0131 02:39:33.180971 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.180976 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.180981 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.181564 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:33.675341 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:39:33.675373 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.675386 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.675394 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.679435 1436700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0131 02:39:33.679462 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.679473 1436700 round_trippers.go:580]     Audit-Id: d14bd20c-cf9f-4ff0-90dd-fed84ddec353
	I0131 02:39:33.679482 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.679496 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.679516 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.679527 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.679533 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.680331 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"940","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0131 02:39:33.680727 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:33.680739 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.680746 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.680751 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.683012 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:33.683032 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.683039 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.683050 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.683055 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.683060 1436700 round_trippers.go:580]     Audit-Id: d05a96cf-2b90-47df-9344-9a03e6dad2ec
	I0131 02:39:33.683068 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.683073 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.684240 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:33.684533 1436700 pod_ready.go:92] pod "etcd-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:33.684553 1436700 pod_ready.go:81] duration metric: took 6.510216838s waiting for pod "etcd-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:33.684569 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:33.684629 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-263108
	I0131 02:39:33.684636 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.684643 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.684648 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.687426 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:33.687452 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.687458 1436700 round_trippers.go:580]     Audit-Id: 84bdf9c3-29cf-430d-937d-8fdb9a8816b5
	I0131 02:39:33.687463 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.687471 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.687476 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.687487 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.687492 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.687647 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-263108","namespace":"kube-system","uid":"0c527200-696b-4681-af91-226016437113","resourceVersion":"910","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.109:8443","kubernetes.io/config.hash":"d670ff05d0032fcc9ae24f8fc09df250","kubernetes.io/config.mirror":"d670ff05d0032fcc9ae24f8fc09df250","kubernetes.io/config.seen":"2024-01-31T02:28:18.078204875Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0131 02:39:33.688004 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:33.688015 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.688022 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.688030 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.690201 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:33.690219 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.690229 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.690235 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.690248 1436700 round_trippers.go:580]     Audit-Id: 4e6e05e5-3c35-4266-9585-9b6a43e88773
	I0131 02:39:33.690255 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.690264 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.690275 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.690865 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:33.691145 1436700 pod_ready.go:92] pod "kube-apiserver-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:33.691159 1436700 pod_ready.go:81] duration metric: took 6.579646ms waiting for pod "kube-apiserver-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:33.691167 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:33.691228 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-263108
	I0131 02:39:33.691236 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.691243 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.691254 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.693669 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:33.693691 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.693704 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.693713 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.693720 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.693728 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.693737 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.693746 1436700 round_trippers.go:580]     Audit-Id: 7dfa1f58-2acf-430c-800a-976f87000f0b
	I0131 02:39:33.694189 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-263108","namespace":"kube-system","uid":"056ea293-6261-4e6c-9b3f-9fdc7d0727a2","resourceVersion":"914","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19e16e470f3c55d41e223486b2026f1d","kubernetes.io/config.mirror":"19e16e470f3c55d41e223486b2026f1d","kubernetes.io/config.seen":"2024-01-31T02:28:18.078205997Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0131 02:39:33.694578 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:33.694591 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.694597 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.694603 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.696341 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:33.696360 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.696369 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.696378 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.696392 1436700 round_trippers.go:580]     Audit-Id: ccfe4ca0-72f9-45ec-816e-32c23b76d4d7
	I0131 02:39:33.696399 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.696414 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.696426 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.696577 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:33.696839 1436700 pod_ready.go:92] pod "kube-controller-manager-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:33.696852 1436700 pod_ready.go:81] duration metric: took 5.678606ms waiting for pod "kube-controller-manager-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:33.696860 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mpxjh" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:33.696907 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:39:33.696914 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.696921 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.696926 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.699283 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:33.699300 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.699306 1436700 round_trippers.go:580]     Audit-Id: bbda07fe-a2b9-4494-80eb-eba513bed951
	I0131 02:39:33.699311 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.699316 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.699322 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.699328 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.699335 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.699827 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mpxjh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3a11b226-7a8e-4b25-a409-acc439d4bdfb","resourceVersion":"759","creationTimestamp":"2024-01-31T02:30:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0131 02:39:33.700152 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:39:33.700163 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.700169 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.700175 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.701989 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:39:33.702005 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.702011 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.702016 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.702021 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.702026 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.702032 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.702037 1436700 round_trippers.go:580]     Audit-Id: a81dffe9-374a-4776-92d3-1ddeee270d83
	I0131 02:39:33.702141 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m03","uid":"5d8d8dfa-72be-4459-b7bc-217aef0cc608","resourceVersion":"785","creationTimestamp":"2024-01-31T02:31:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_31_25_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:31:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3964 chars]
	I0131 02:39:33.702367 1436700 pod_ready.go:92] pod "kube-proxy-mpxjh" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:33.702381 1436700 pod_ready.go:81] duration metric: took 5.516134ms waiting for pod "kube-proxy-mpxjh" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:33.702392 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x5jb7" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:33.702442 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x5jb7
	I0131 02:39:33.702449 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.702456 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.702462 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.705185 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:33.705212 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.705219 1436700 round_trippers.go:580]     Audit-Id: 119aeefc-e984-4e50-b735-6fbae61427f8
	I0131 02:39:33.705224 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.705229 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.705234 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.705239 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.705244 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.705365 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x5jb7","generateName":"kube-proxy-","namespace":"kube-system","uid":"4dc3dae9-7781-4832-88ba-08a17ecfe557","resourceVersion":"554","creationTimestamp":"2024-01-31T02:29:54Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0131 02:39:33.837039 1436700 request.go:629] Waited for 131.308076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:39:33.837104 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:39:33.837109 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:33.837117 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:33.837125 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:33.839700 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:33.839718 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:33.839725 1436700 round_trippers.go:580]     Audit-Id: 6c7ef79d-d797-44d0-93c4-f8f317a765bd
	I0131 02:39:33.839731 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:33.839747 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:33.839757 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:33.839766 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:33.839774 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:33 GMT
	I0131 02:39:33.839903 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m02","uid":"33ce8eca-eb98-4b22-953c-97e57c604ffc","resourceVersion":"782","creationTimestamp":"2024-01-31T02:29:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_31_25_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0131 02:39:33.840191 1436700 pod_ready.go:92] pod "kube-proxy-x5jb7" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:33.840206 1436700 pod_ready.go:81] duration metric: took 137.80546ms waiting for pod "kube-proxy-x5jb7" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:33.840220 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x85lz" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:34.037705 1436700 request.go:629] Waited for 197.410503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x85lz
	I0131 02:39:34.037789 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x85lz
	I0131 02:39:34.037795 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:34.037803 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:34.037810 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:34.040612 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:34.040640 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:34.040650 1436700 round_trippers.go:580]     Audit-Id: f61d8c00-ddd2-4893-9d78-e500c59dfc61
	I0131 02:39:34.040659 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:34.040666 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:34.040674 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:34.040681 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:34.040689 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:34 GMT
	I0131 02:39:34.040899 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x85lz","generateName":"kube-proxy-","namespace":"kube-system","uid":"36e014b9-154e-43f4-b694-7f05bd31baef","resourceVersion":"837","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0131 02:39:34.237496 1436700 request.go:629] Waited for 196.11915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:34.237562 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:34.237567 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:34.237575 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:34.237580 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:34.240144 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:34.240168 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:34.240179 1436700 round_trippers.go:580]     Audit-Id: 8e82da9a-2e65-4732-b33e-310f5ba38a1e
	I0131 02:39:34.240186 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:34.240193 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:34.240205 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:34.240214 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:34.240234 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:34 GMT
	I0131 02:39:34.240379 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:34.240721 1436700 pod_ready.go:92] pod "kube-proxy-x85lz" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:34.240740 1436700 pod_ready.go:81] duration metric: took 400.51199ms waiting for pod "kube-proxy-x85lz" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:34.240760 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:34.437754 1436700 request.go:629] Waited for 196.911897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-263108
	I0131 02:39:34.437848 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-263108
	I0131 02:39:34.437854 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:34.437862 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:34.437869 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:34.440540 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:34.440562 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:34.440573 1436700 round_trippers.go:580]     Audit-Id: 60202ba5-089c-476d-b6d2-2d0bb66d12fc
	I0131 02:39:34.440580 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:34.440587 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:34.440595 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:34.440607 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:34.440617 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:34 GMT
	I0131 02:39:34.440812 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-263108","namespace":"kube-system","uid":"7cc8534f-0f2b-457e-9942-e49d0f507875","resourceVersion":"941","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7320cc932f9ec0e3160c3b0ecdf22c62","kubernetes.io/config.mirror":"7320cc932f9ec0e3160c3b0ecdf22c62","kubernetes.io/config.seen":"2024-01-31T02:28:18.078207038Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0131 02:39:34.637598 1436700 request.go:629] Waited for 196.371054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:34.637676 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:39:34.637684 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:34.637696 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:34.637709 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:34.640805 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:34.640833 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:34.640842 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:34 GMT
	I0131 02:39:34.640851 1436700 round_trippers.go:580]     Audit-Id: 2a38893e-3042-4c2e-8a83-a07546e81d75
	I0131 02:39:34.640859 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:34.640868 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:34.640877 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:34.640886 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:34.641296 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0131 02:39:34.641746 1436700 pod_ready.go:92] pod "kube-scheduler-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:39:34.641774 1436700 pod_ready.go:81] duration metric: took 401.003751ms waiting for pod "kube-scheduler-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:39:34.641786 1436700 pod_ready.go:38] duration metric: took 11.98328762s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:39:34.641811 1436700 api_server.go:52] waiting for apiserver process to appear ...
	I0131 02:39:34.641870 1436700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:39:34.656401 1436700 command_runner.go:130] > 1089
	I0131 02:39:34.656457 1436700 api_server.go:72] duration metric: took 12.596155358s to wait for apiserver process to appear ...
	I0131 02:39:34.656469 1436700 api_server.go:88] waiting for apiserver healthz status ...
	I0131 02:39:34.656495 1436700 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0131 02:39:34.662379 1436700 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I0131 02:39:34.662456 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/version
	I0131 02:39:34.662468 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:34.662477 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:34.662510 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:34.663418 1436700 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0131 02:39:34.663436 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:34.663446 1436700 round_trippers.go:580]     Content-Length: 264
	I0131 02:39:34.663452 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:34 GMT
	I0131 02:39:34.663458 1436700 round_trippers.go:580]     Audit-Id: 87e41101-26d9-46d9-b32e-f88e97f3495d
	I0131 02:39:34.663466 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:34.663472 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:34.663479 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:34.663484 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:34.663504 1436700 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0131 02:39:34.663557 1436700 api_server.go:141] control plane version: v1.28.4
	I0131 02:39:34.663572 1436700 api_server.go:131] duration metric: took 7.095885ms to wait for apiserver health ...
	I0131 02:39:34.663581 1436700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 02:39:34.836956 1436700 request.go:629] Waited for 173.280692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0131 02:39:34.837034 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0131 02:39:34.837040 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:34.837050 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:34.837061 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:34.841707 1436700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0131 02:39:34.841730 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:34.841737 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:34.841743 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:34.841748 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:34.841753 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:34 GMT
	I0131 02:39:34.841758 1436700 round_trippers.go:580]     Audit-Id: a429d6aa-4223-407d-90e2-bf5ccd203a93
	I0131 02:39:34.841763 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:34.842908 1436700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"941"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"918","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82074 chars]
	I0131 02:39:34.845306 1436700 system_pods.go:59] 12 kube-system pods found
	I0131 02:39:34.845327 1436700 system_pods.go:61] "coredns-5dd5756b68-skqw4" [713e1df7-54be-4322-986d-b6d7db88c1c7] Running
	I0131 02:39:34.845332 1436700 system_pods.go:61] "etcd-multinode-263108" [cf8c4ba5-fce9-4570-a204-0b713281fc21] Running
	I0131 02:39:34.845340 1436700 system_pods.go:61] "kindnet-88m7n" [afe9a549-0baf-4f87-8582-7cd758b8192d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0131 02:39:34.845348 1436700 system_pods.go:61] "kindnet-knvl8" [8e734b81-4d44-4c96-8439-0ef800021bf8] Running
	I0131 02:39:34.845355 1436700 system_pods.go:61] "kindnet-zvrh5" [2b89787d-5c3c-48e6-aecc-441c99cd1017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0131 02:39:34.845363 1436700 system_pods.go:61] "kube-apiserver-multinode-263108" [0c527200-696b-4681-af91-226016437113] Running
	I0131 02:39:34.845371 1436700 system_pods.go:61] "kube-controller-manager-multinode-263108" [056ea293-6261-4e6c-9b3f-9fdc7d0727a2] Running
	I0131 02:39:34.845377 1436700 system_pods.go:61] "kube-proxy-mpxjh" [3a11b226-7a8e-4b25-a409-acc439d4bdfb] Running
	I0131 02:39:34.845381 1436700 system_pods.go:61] "kube-proxy-x5jb7" [4dc3dae9-7781-4832-88ba-08a17ecfe557] Running
	I0131 02:39:34.845387 1436700 system_pods.go:61] "kube-proxy-x85lz" [36e014b9-154e-43f4-b694-7f05bd31baef] Running
	I0131 02:39:34.845392 1436700 system_pods.go:61] "kube-scheduler-multinode-263108" [7cc8534f-0f2b-457e-9942-e49d0f507875] Running
	I0131 02:39:34.845397 1436700 system_pods.go:61] "storage-provisioner" [eaba2b6b-2a00-4af9-bdb8-67d110b3eb19] Running
	I0131 02:39:34.845404 1436700 system_pods.go:74] duration metric: took 181.81544ms to wait for pod list to return data ...
	I0131 02:39:34.845417 1436700 default_sa.go:34] waiting for default service account to be created ...
	I0131 02:39:35.036801 1436700 request.go:629] Waited for 191.298544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0131 02:39:35.036922 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/default/serviceaccounts
	I0131 02:39:35.036934 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:35.036946 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:35.036957 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:35.039770 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:39:35.039792 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:35.039799 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:35.039804 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:35.039809 1436700 round_trippers.go:580]     Content-Length: 261
	I0131 02:39:35.039814 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:35 GMT
	I0131 02:39:35.039819 1436700 round_trippers.go:580]     Audit-Id: ed803123-eccf-473e-b58f-5599f1d220d3
	I0131 02:39:35.039824 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:35.039829 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:35.039850 1436700 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"941"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a547548a-2324-44cd-af81-a2207755f763","resourceVersion":"369","creationTimestamp":"2024-01-31T02:28:30Z"}}]}
	I0131 02:39:35.040084 1436700 default_sa.go:45] found service account: "default"
	I0131 02:39:35.040102 1436700 default_sa.go:55] duration metric: took 194.676753ms for default service account to be created ...
	I0131 02:39:35.040114 1436700 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 02:39:35.237561 1436700 request.go:629] Waited for 197.375054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0131 02:39:35.237636 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0131 02:39:35.237643 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:35.237656 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:35.237686 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:35.241819 1436700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0131 02:39:35.241843 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:35.241853 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:35.241861 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:35.241883 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:35 GMT
	I0131 02:39:35.241899 1436700 round_trippers.go:580]     Audit-Id: 76633091-c9f8-4274-ab62-73527eddc039
	I0131 02:39:35.241908 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:35.241921 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:35.243542 1436700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"941"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"918","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82074 chars]
	I0131 02:39:35.246362 1436700 system_pods.go:86] 12 kube-system pods found
	I0131 02:39:35.246392 1436700 system_pods.go:89] "coredns-5dd5756b68-skqw4" [713e1df7-54be-4322-986d-b6d7db88c1c7] Running
	I0131 02:39:35.246401 1436700 system_pods.go:89] "etcd-multinode-263108" [cf8c4ba5-fce9-4570-a204-0b713281fc21] Running
	I0131 02:39:35.246413 1436700 system_pods.go:89] "kindnet-88m7n" [afe9a549-0baf-4f87-8582-7cd758b8192d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0131 02:39:35.246425 1436700 system_pods.go:89] "kindnet-knvl8" [8e734b81-4d44-4c96-8439-0ef800021bf8] Running
	I0131 02:39:35.246442 1436700 system_pods.go:89] "kindnet-zvrh5" [2b89787d-5c3c-48e6-aecc-441c99cd1017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0131 02:39:35.246454 1436700 system_pods.go:89] "kube-apiserver-multinode-263108" [0c527200-696b-4681-af91-226016437113] Running
	I0131 02:39:35.246462 1436700 system_pods.go:89] "kube-controller-manager-multinode-263108" [056ea293-6261-4e6c-9b3f-9fdc7d0727a2] Running
	I0131 02:39:35.246473 1436700 system_pods.go:89] "kube-proxy-mpxjh" [3a11b226-7a8e-4b25-a409-acc439d4bdfb] Running
	I0131 02:39:35.246499 1436700 system_pods.go:89] "kube-proxy-x5jb7" [4dc3dae9-7781-4832-88ba-08a17ecfe557] Running
	I0131 02:39:35.246509 1436700 system_pods.go:89] "kube-proxy-x85lz" [36e014b9-154e-43f4-b694-7f05bd31baef] Running
	I0131 02:39:35.246516 1436700 system_pods.go:89] "kube-scheduler-multinode-263108" [7cc8534f-0f2b-457e-9942-e49d0f507875] Running
	I0131 02:39:35.246525 1436700 system_pods.go:89] "storage-provisioner" [eaba2b6b-2a00-4af9-bdb8-67d110b3eb19] Running
	I0131 02:39:35.246535 1436700 system_pods.go:126] duration metric: took 206.409112ms to wait for k8s-apps to be running ...
	I0131 02:39:35.246550 1436700 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 02:39:35.246617 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:39:35.259795 1436700 system_svc.go:56] duration metric: took 13.237093ms WaitForService to wait for kubelet.
	I0131 02:39:35.259819 1436700 kubeadm.go:581] duration metric: took 13.199521209s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 02:39:35.259840 1436700 node_conditions.go:102] verifying NodePressure condition ...
	I0131 02:39:35.437263 1436700 request.go:629] Waited for 177.349089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes
	I0131 02:39:35.437373 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes
	I0131 02:39:35.437385 1436700 round_trippers.go:469] Request Headers:
	I0131 02:39:35.437399 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:39:35.437413 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:39:35.440476 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:39:35.440505 1436700 round_trippers.go:577] Response Headers:
	I0131 02:39:35.440516 1436700 round_trippers.go:580]     Audit-Id: 00fb5820-8b12-4565-bdd6-2e4438282122
	I0131 02:39:35.440526 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:39:35.440553 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:39:35.440561 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:39:35.440574 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:39:35.440583 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:39:35 GMT
	I0131 02:39:35.441254 1436700 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"941"},"items":[{"metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"906","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16178 chars]
	I0131 02:39:35.441854 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:39:35.441875 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:39:35.441886 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:39:35.441890 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:39:35.441893 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:39:35.441900 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:39:35.441903 1436700 node_conditions.go:105] duration metric: took 182.059877ms to run NodePressure ...
	I0131 02:39:35.441916 1436700 start.go:228] waiting for startup goroutines ...
	I0131 02:39:35.441931 1436700 start.go:233] waiting for cluster config update ...
	I0131 02:39:35.441938 1436700 start.go:242] writing updated cluster config ...
	I0131 02:39:35.442475 1436700 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:39:35.442606 1436700 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/config.json ...
	I0131 02:39:35.445911 1436700 out.go:177] * Starting worker node multinode-263108-m02 in cluster multinode-263108
	I0131 02:39:35.447163 1436700 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 02:39:35.447188 1436700 cache.go:56] Caching tarball of preloaded images
	I0131 02:39:35.447271 1436700 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 02:39:35.447283 1436700 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 02:39:35.447374 1436700 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/config.json ...
	I0131 02:39:35.447534 1436700 start.go:365] acquiring machines lock for multinode-263108-m02: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 02:39:35.447582 1436700 start.go:369] acquired machines lock for "multinode-263108-m02" in 27.393µs
	I0131 02:39:35.447597 1436700 start.go:96] Skipping create...Using existing machine configuration
	I0131 02:39:35.447605 1436700 fix.go:54] fixHost starting: m02
	I0131 02:39:35.447886 1436700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:39:35.447909 1436700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:39:35.462915 1436700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I0131 02:39:35.463398 1436700 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:39:35.463923 1436700 main.go:141] libmachine: Using API Version  1
	I0131 02:39:35.463946 1436700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:39:35.464273 1436700 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:39:35.464497 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .DriverName
	I0131 02:39:35.464689 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetState
	I0131 02:39:35.466511 1436700 fix.go:102] recreateIfNeeded on multinode-263108-m02: state=Running err=<nil>
	W0131 02:39:35.466530 1436700 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 02:39:35.468565 1436700 out.go:177] * Updating the running kvm2 "multinode-263108-m02" VM ...
	I0131 02:39:35.469842 1436700 machine.go:88] provisioning docker machine ...
	I0131 02:39:35.469866 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .DriverName
	I0131 02:39:35.470131 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetMachineName
	I0131 02:39:35.470310 1436700 buildroot.go:166] provisioning hostname "multinode-263108-m02"
	I0131 02:39:35.470333 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetMachineName
	I0131 02:39:35.470475 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	I0131 02:39:35.472896 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:35.473381 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:39:35.473415 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:35.473554 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHPort
	I0131 02:39:35.473724 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:39:35.473869 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:39:35.474038 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHUsername
	I0131 02:39:35.474209 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:39:35.474647 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0131 02:39:35.474680 1436700 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-263108-m02 && echo "multinode-263108-m02" | sudo tee /etc/hostname
	I0131 02:39:35.608370 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-263108-m02
	
	I0131 02:39:35.608412 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	I0131 02:39:35.611224 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:35.611631 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:39:35.611665 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:35.611862 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHPort
	I0131 02:39:35.612112 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:39:35.612285 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:39:35.612464 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHUsername
	I0131 02:39:35.612635 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:39:35.612954 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0131 02:39:35.612971 1436700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-263108-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-263108-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-263108-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 02:39:35.731142 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 02:39:35.731182 1436700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 02:39:35.731202 1436700 buildroot.go:174] setting up certificates
	I0131 02:39:35.731222 1436700 provision.go:83] configureAuth start
	I0131 02:39:35.731237 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetMachineName
	I0131 02:39:35.731593 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetIP
	I0131 02:39:35.734095 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:35.734501 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:39:35.734535 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:35.734669 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	I0131 02:39:35.737291 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:35.737661 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:39:35.737695 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:35.737843 1436700 provision.go:138] copyHostCerts
	I0131 02:39:35.737878 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 02:39:35.737912 1436700 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 02:39:35.737926 1436700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 02:39:35.737992 1436700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 02:39:35.738079 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 02:39:35.738097 1436700 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 02:39:35.738108 1436700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 02:39:35.738136 1436700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 02:39:35.738190 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 02:39:35.738208 1436700 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 02:39:35.738212 1436700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 02:39:35.738233 1436700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 02:39:35.738294 1436700 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.multinode-263108-m02 san=[192.168.39.60 192.168.39.60 localhost 127.0.0.1 minikube multinode-263108-m02]
	I0131 02:39:36.062603 1436700 provision.go:172] copyRemoteCerts
	I0131 02:39:36.062680 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 02:39:36.062710 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	I0131 02:39:36.065554 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:36.065980 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:39:36.066025 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:36.066266 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHPort
	I0131 02:39:36.066552 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:39:36.066748 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHUsername
	I0131 02:39:36.066967 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108-m02/id_rsa Username:docker}
	I0131 02:39:36.154943 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0131 02:39:36.155100 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 02:39:36.178662 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0131 02:39:36.178725 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0131 02:39:36.199717 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0131 02:39:36.199795 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 02:39:36.222689 1436700 provision.go:86] duration metric: configureAuth took 491.451809ms
	I0131 02:39:36.222717 1436700 buildroot.go:189] setting minikube options for container-runtime
	I0131 02:39:36.222962 1436700 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:39:36.223038 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	I0131 02:39:36.225762 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:36.226103 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:39:36.226153 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:39:36.226282 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHPort
	I0131 02:39:36.226469 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:39:36.226679 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:39:36.226868 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHUsername
	I0131 02:39:36.227069 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:39:36.227447 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0131 02:39:36.227465 1436700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 02:41:06.884579 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 02:41:06.884613 1436700 machine.go:91] provisioned docker machine in 1m31.414755073s
	I0131 02:41:06.884631 1436700 start.go:300] post-start starting for "multinode-263108-m02" (driver="kvm2")
	I0131 02:41:06.884648 1436700 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 02:41:06.884681 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .DriverName
	I0131 02:41:06.885179 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 02:41:06.885251 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	I0131 02:41:06.888079 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:06.888590 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:41:06.888610 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:06.888827 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHPort
	I0131 02:41:06.889051 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:41:06.889252 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHUsername
	I0131 02:41:06.889385 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108-m02/id_rsa Username:docker}
	I0131 02:41:06.980307 1436700 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 02:41:06.984539 1436700 command_runner.go:130] > NAME=Buildroot
	I0131 02:41:06.984559 1436700 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0131 02:41:06.984564 1436700 command_runner.go:130] > ID=buildroot
	I0131 02:41:06.984569 1436700 command_runner.go:130] > VERSION_ID=2021.02.12
	I0131 02:41:06.984574 1436700 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0131 02:41:06.984643 1436700 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 02:41:06.984674 1436700 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 02:41:06.984774 1436700 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 02:41:06.984845 1436700 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 02:41:06.984856 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> /etc/ssl/certs/14199762.pem
	I0131 02:41:06.984949 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 02:41:06.993161 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:41:07.016305 1436700 start.go:303] post-start completed in 131.65398ms
	I0131 02:41:07.016339 1436700 fix.go:56] fixHost completed within 1m31.56873252s
	I0131 02:41:07.016364 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	I0131 02:41:07.019217 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:07.019581 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:41:07.019634 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:07.019769 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHPort
	I0131 02:41:07.019989 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:41:07.020129 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:41:07.020258 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHUsername
	I0131 02:41:07.020477 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:41:07.020828 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.60 22 <nil> <nil>}
	I0131 02:41:07.020844 1436700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 02:41:07.138954 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706668867.130090112
	
	I0131 02:41:07.138977 1436700 fix.go:206] guest clock: 1706668867.130090112
	I0131 02:41:07.138987 1436700 fix.go:219] Guest: 2024-01-31 02:41:07.130090112 +0000 UTC Remote: 2024-01-31 02:41:07.016343804 +0000 UTC m=+451.088818220 (delta=113.746308ms)
	I0131 02:41:07.139017 1436700 fix.go:190] guest clock delta is within tolerance: 113.746308ms
	I0131 02:41:07.139023 1436700 start.go:83] releasing machines lock for "multinode-263108-m02", held for 1m31.691431803s
	I0131 02:41:07.139044 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .DriverName
	I0131 02:41:07.139370 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetIP
	I0131 02:41:07.142028 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:07.142371 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:41:07.142403 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:07.144498 1436700 out.go:177] * Found network options:
	I0131 02:41:07.145923 1436700 out.go:177]   - NO_PROXY=192.168.39.109
	W0131 02:41:07.147100 1436700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0131 02:41:07.147145 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .DriverName
	I0131 02:41:07.147668 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .DriverName
	I0131 02:41:07.147831 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .DriverName
	I0131 02:41:07.147933 1436700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 02:41:07.147978 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	W0131 02:41:07.147992 1436700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0131 02:41:07.148072 1436700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 02:41:07.148098 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	I0131 02:41:07.150629 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:07.151104 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:41:07.151133 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:07.151167 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:07.151329 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHPort
	I0131 02:41:07.151523 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:41:07.151678 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHUsername
	I0131 02:41:07.151679 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:41:07.151723 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:07.151879 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108-m02/id_rsa Username:docker}
	I0131 02:41:07.151948 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHPort
	I0131 02:41:07.152103 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:41:07.152276 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHUsername
	I0131 02:41:07.152435 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108-m02/id_rsa Username:docker}
	I0131 02:41:07.271284 1436700 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0131 02:41:07.380109 1436700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0131 02:41:07.385629 1436700 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0131 02:41:07.385977 1436700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 02:41:07.386046 1436700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 02:41:07.394111 1436700 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0131 02:41:07.394132 1436700 start.go:475] detecting cgroup driver to use...
	I0131 02:41:07.394196 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 02:41:07.407304 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 02:41:07.418976 1436700 docker.go:217] disabling cri-docker service (if available) ...
	I0131 02:41:07.419035 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 02:41:07.430631 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 02:41:07.442170 1436700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 02:41:07.560639 1436700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 02:41:07.677627 1436700 docker.go:233] disabling docker service ...
	I0131 02:41:07.677714 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 02:41:07.692278 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 02:41:07.704413 1436700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 02:41:07.821482 1436700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 02:41:07.937221 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 02:41:07.949168 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 02:41:07.967374 1436700 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0131 02:41:07.967424 1436700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 02:41:07.967478 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:41:07.976500 1436700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 02:41:07.976575 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:41:07.985564 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:41:07.994473 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:41:08.002822 1436700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 02:41:08.012612 1436700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 02:41:08.020292 1436700 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0131 02:41:08.020457 1436700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 02:41:08.028821 1436700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 02:41:08.144097 1436700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 02:41:08.359872 1436700 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 02:41:08.359966 1436700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 02:41:08.364901 1436700 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0131 02:41:08.364922 1436700 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0131 02:41:08.364929 1436700 command_runner.go:130] > Device: 16h/22d	Inode: 1263        Links: 1
	I0131 02:41:08.364937 1436700 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0131 02:41:08.364948 1436700 command_runner.go:130] > Access: 2024-01-31 02:41:08.286750608 +0000
	I0131 02:41:08.364960 1436700 command_runner.go:130] > Modify: 2024-01-31 02:41:08.286750608 +0000
	I0131 02:41:08.364968 1436700 command_runner.go:130] > Change: 2024-01-31 02:41:08.286750608 +0000
	I0131 02:41:08.364978 1436700 command_runner.go:130] >  Birth: -
	I0131 02:41:08.365252 1436700 start.go:543] Will wait 60s for crictl version
	I0131 02:41:08.365314 1436700 ssh_runner.go:195] Run: which crictl
	I0131 02:41:08.368547 1436700 command_runner.go:130] > /usr/bin/crictl
	I0131 02:41:08.368748 1436700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 02:41:08.407060 1436700 command_runner.go:130] > Version:  0.1.0
	I0131 02:41:08.407088 1436700 command_runner.go:130] > RuntimeName:  cri-o
	I0131 02:41:08.407095 1436700 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0131 02:41:08.407102 1436700 command_runner.go:130] > RuntimeApiVersion:  v1
	I0131 02:41:08.407268 1436700 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 02:41:08.407357 1436700 ssh_runner.go:195] Run: crio --version
	I0131 02:41:08.457548 1436700 command_runner.go:130] > crio version 1.24.1
	I0131 02:41:08.457579 1436700 command_runner.go:130] > Version:          1.24.1
	I0131 02:41:08.457590 1436700 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0131 02:41:08.457614 1436700 command_runner.go:130] > GitTreeState:     dirty
	I0131 02:41:08.457624 1436700 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0131 02:41:08.457631 1436700 command_runner.go:130] > GoVersion:        go1.19.9
	I0131 02:41:08.457635 1436700 command_runner.go:130] > Compiler:         gc
	I0131 02:41:08.457640 1436700 command_runner.go:130] > Platform:         linux/amd64
	I0131 02:41:08.457645 1436700 command_runner.go:130] > Linkmode:         dynamic
	I0131 02:41:08.457656 1436700 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0131 02:41:08.457668 1436700 command_runner.go:130] > SeccompEnabled:   true
	I0131 02:41:08.457675 1436700 command_runner.go:130] > AppArmorEnabled:  false
	I0131 02:41:08.457809 1436700 ssh_runner.go:195] Run: crio --version
	I0131 02:41:08.498051 1436700 command_runner.go:130] > crio version 1.24.1
	I0131 02:41:08.498080 1436700 command_runner.go:130] > Version:          1.24.1
	I0131 02:41:08.498089 1436700 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0131 02:41:08.498095 1436700 command_runner.go:130] > GitTreeState:     dirty
	I0131 02:41:08.498104 1436700 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0131 02:41:08.498111 1436700 command_runner.go:130] > GoVersion:        go1.19.9
	I0131 02:41:08.498117 1436700 command_runner.go:130] > Compiler:         gc
	I0131 02:41:08.498124 1436700 command_runner.go:130] > Platform:         linux/amd64
	I0131 02:41:08.498132 1436700 command_runner.go:130] > Linkmode:         dynamic
	I0131 02:41:08.498144 1436700 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0131 02:41:08.498166 1436700 command_runner.go:130] > SeccompEnabled:   true
	I0131 02:41:08.498178 1436700 command_runner.go:130] > AppArmorEnabled:  false
	I0131 02:41:08.500200 1436700 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 02:41:08.501590 1436700 out.go:177]   - env NO_PROXY=192.168.39.109
	I0131 02:41:08.502883 1436700 main.go:141] libmachine: (multinode-263108-m02) Calling .GetIP
	I0131 02:41:08.505564 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:08.505953 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:41:08.505975 1436700 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:41:08.506278 1436700 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 02:41:08.510600 1436700 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0131 02:41:08.510922 1436700 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108 for IP: 192.168.39.60
	I0131 02:41:08.510947 1436700 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:41:08.511126 1436700 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 02:41:08.511177 1436700 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 02:41:08.511195 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0131 02:41:08.511212 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0131 02:41:08.511230 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0131 02:41:08.511244 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0131 02:41:08.511314 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 02:41:08.511355 1436700 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 02:41:08.511371 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 02:41:08.511407 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 02:41:08.511445 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 02:41:08.511481 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 02:41:08.511536 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:41:08.511575 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> /usr/share/ca-certificates/14199762.pem
	I0131 02:41:08.511597 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:41:08.511615 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem -> /usr/share/ca-certificates/1419976.pem
	I0131 02:41:08.512087 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 02:41:08.540131 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 02:41:08.561905 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 02:41:08.583446 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 02:41:08.605752 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 02:41:08.628886 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 02:41:08.649996 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 02:41:08.671949 1436700 ssh_runner.go:195] Run: openssl version
	I0131 02:41:08.677216 1436700 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0131 02:41:08.677282 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 02:41:08.686334 1436700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 02:41:08.690846 1436700 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 02:41:08.690948 1436700 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 02:41:08.691007 1436700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 02:41:08.695848 1436700 command_runner.go:130] > 3ec20f2e
	I0131 02:41:08.696042 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 02:41:08.703627 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 02:41:08.712308 1436700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:41:08.716424 1436700 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:41:08.716479 1436700 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:41:08.716517 1436700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:41:08.721650 1436700 command_runner.go:130] > b5213941
	I0131 02:41:08.721706 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 02:41:08.729367 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 02:41:08.738212 1436700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 02:41:08.742084 1436700 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 02:41:08.742225 1436700 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 02:41:08.742280 1436700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 02:41:08.747192 1436700 command_runner.go:130] > 51391683
	I0131 02:41:08.747415 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 02:41:08.755329 1436700 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 02:41:08.759019 1436700 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0131 02:41:08.759056 1436700 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0131 02:41:08.759170 1436700 ssh_runner.go:195] Run: crio config
	I0131 02:41:08.809198 1436700 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0131 02:41:08.809226 1436700 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0131 02:41:08.809236 1436700 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0131 02:41:08.809242 1436700 command_runner.go:130] > #
	I0131 02:41:08.809255 1436700 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0131 02:41:08.809265 1436700 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0131 02:41:08.809277 1436700 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0131 02:41:08.809292 1436700 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0131 02:41:08.809297 1436700 command_runner.go:130] > # reload'.
	I0131 02:41:08.809304 1436700 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0131 02:41:08.809318 1436700 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0131 02:41:08.809330 1436700 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0131 02:41:08.809345 1436700 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0131 02:41:08.809351 1436700 command_runner.go:130] > [crio]
	I0131 02:41:08.809364 1436700 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0131 02:41:08.809371 1436700 command_runner.go:130] > # containers images, in this directory.
	I0131 02:41:08.809377 1436700 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0131 02:41:08.809389 1436700 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0131 02:41:08.809398 1436700 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0131 02:41:08.809409 1436700 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0131 02:41:08.809424 1436700 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0131 02:41:08.809435 1436700 command_runner.go:130] > storage_driver = "overlay"
	I0131 02:41:08.809448 1436700 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0131 02:41:08.809460 1436700 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0131 02:41:08.809470 1436700 command_runner.go:130] > storage_option = [
	I0131 02:41:08.809604 1436700 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0131 02:41:08.809620 1436700 command_runner.go:130] > ]
	I0131 02:41:08.809633 1436700 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0131 02:41:08.809643 1436700 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0131 02:41:08.809654 1436700 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0131 02:41:08.809665 1436700 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0131 02:41:08.809677 1436700 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0131 02:41:08.809687 1436700 command_runner.go:130] > # always happen on a node reboot
	I0131 02:41:08.809745 1436700 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0131 02:41:08.809768 1436700 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0131 02:41:08.809779 1436700 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0131 02:41:08.809842 1436700 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0131 02:41:08.809857 1436700 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0131 02:41:08.809870 1436700 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0131 02:41:08.809889 1436700 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0131 02:41:08.809900 1436700 command_runner.go:130] > # internal_wipe = true
	I0131 02:41:08.809913 1436700 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0131 02:41:08.809927 1436700 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0131 02:41:08.809941 1436700 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0131 02:41:08.809954 1436700 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0131 02:41:08.809968 1436700 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0131 02:41:08.809978 1436700 command_runner.go:130] > [crio.api]
	I0131 02:41:08.809988 1436700 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0131 02:41:08.809999 1436700 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0131 02:41:08.810012 1436700 command_runner.go:130] > # IP address on which the stream server will listen.
	I0131 02:41:08.810031 1436700 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0131 02:41:08.810044 1436700 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0131 02:41:08.810057 1436700 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0131 02:41:08.810071 1436700 command_runner.go:130] > # stream_port = "0"
	I0131 02:41:08.810084 1436700 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0131 02:41:08.810094 1436700 command_runner.go:130] > # stream_enable_tls = false
	I0131 02:41:08.810108 1436700 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0131 02:41:08.810119 1436700 command_runner.go:130] > # stream_idle_timeout = ""
	I0131 02:41:08.810133 1436700 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0131 02:41:08.810148 1436700 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0131 02:41:08.810157 1436700 command_runner.go:130] > # minutes.
	I0131 02:41:08.810167 1436700 command_runner.go:130] > # stream_tls_cert = ""
	I0131 02:41:08.810179 1436700 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0131 02:41:08.810194 1436700 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0131 02:41:08.810205 1436700 command_runner.go:130] > # stream_tls_key = ""
	I0131 02:41:08.810217 1436700 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0131 02:41:08.810232 1436700 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0131 02:41:08.810245 1436700 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0131 02:41:08.810256 1436700 command_runner.go:130] > # stream_tls_ca = ""
	I0131 02:41:08.810270 1436700 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0131 02:41:08.810282 1436700 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0131 02:41:08.810298 1436700 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0131 02:41:08.810309 1436700 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0131 02:41:08.810331 1436700 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0131 02:41:08.810345 1436700 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0131 02:41:08.810354 1436700 command_runner.go:130] > [crio.runtime]
	I0131 02:41:08.810366 1436700 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0131 02:41:08.810379 1436700 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0131 02:41:08.810389 1436700 command_runner.go:130] > # "nofile=1024:2048"
	I0131 02:41:08.810403 1436700 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0131 02:41:08.810414 1436700 command_runner.go:130] > # default_ulimits = [
	I0131 02:41:08.810421 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.810435 1436700 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0131 02:41:08.810446 1436700 command_runner.go:130] > # no_pivot = false
	I0131 02:41:08.810457 1436700 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0131 02:41:08.810471 1436700 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0131 02:41:08.810493 1436700 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0131 02:41:08.810507 1436700 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0131 02:41:08.810518 1436700 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0131 02:41:08.810534 1436700 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0131 02:41:08.810545 1436700 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0131 02:41:08.810557 1436700 command_runner.go:130] > # Cgroup setting for conmon
	I0131 02:41:08.810572 1436700 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0131 02:41:08.810584 1436700 command_runner.go:130] > conmon_cgroup = "pod"
	I0131 02:41:08.810597 1436700 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0131 02:41:08.810607 1436700 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0131 02:41:08.810622 1436700 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0131 02:41:08.810634 1436700 command_runner.go:130] > conmon_env = [
	I0131 02:41:08.810648 1436700 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0131 02:41:08.810665 1436700 command_runner.go:130] > ]
	I0131 02:41:08.810679 1436700 command_runner.go:130] > # Additional environment variables to set for all the
	I0131 02:41:08.810689 1436700 command_runner.go:130] > # containers. These are overridden if set in the
	I0131 02:41:08.810699 1436700 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0131 02:41:08.810706 1436700 command_runner.go:130] > # default_env = [
	I0131 02:41:08.810715 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.810725 1436700 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0131 02:41:08.810735 1436700 command_runner.go:130] > # selinux = false
	I0131 02:41:08.810746 1436700 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0131 02:41:08.810760 1436700 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0131 02:41:08.810769 1436700 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0131 02:41:08.810776 1436700 command_runner.go:130] > # seccomp_profile = ""
	I0131 02:41:08.810790 1436700 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0131 02:41:08.810803 1436700 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0131 02:41:08.810816 1436700 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0131 02:41:08.810827 1436700 command_runner.go:130] > # which might increase security.
	I0131 02:41:08.810837 1436700 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0131 02:41:08.810851 1436700 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0131 02:41:08.810866 1436700 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0131 02:41:08.810880 1436700 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0131 02:41:08.810892 1436700 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0131 02:41:08.810904 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:41:08.810915 1436700 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0131 02:41:08.810927 1436700 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0131 02:41:08.810938 1436700 command_runner.go:130] > # the cgroup blockio controller.
	I0131 02:41:08.810948 1436700 command_runner.go:130] > # blockio_config_file = ""
	I0131 02:41:08.810956 1436700 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0131 02:41:08.810965 1436700 command_runner.go:130] > # irqbalance daemon.
	I0131 02:41:08.810975 1436700 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0131 02:41:08.810990 1436700 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0131 02:41:08.811002 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:41:08.811012 1436700 command_runner.go:130] > # rdt_config_file = ""
	I0131 02:41:08.811024 1436700 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0131 02:41:08.811035 1436700 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0131 02:41:08.811050 1436700 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0131 02:41:08.811059 1436700 command_runner.go:130] > # separate_pull_cgroup = ""
	I0131 02:41:08.811067 1436700 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0131 02:41:08.811081 1436700 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0131 02:41:08.811091 1436700 command_runner.go:130] > # will be added.
	I0131 02:41:08.811099 1436700 command_runner.go:130] > # default_capabilities = [
	I0131 02:41:08.811109 1436700 command_runner.go:130] > # 	"CHOWN",
	I0131 02:41:08.811118 1436700 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0131 02:41:08.811128 1436700 command_runner.go:130] > # 	"FSETID",
	I0131 02:41:08.811138 1436700 command_runner.go:130] > # 	"FOWNER",
	I0131 02:41:08.811145 1436700 command_runner.go:130] > # 	"SETGID",
	I0131 02:41:08.811154 1436700 command_runner.go:130] > # 	"SETUID",
	I0131 02:41:08.811161 1436700 command_runner.go:130] > # 	"SETPCAP",
	I0131 02:41:08.811170 1436700 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0131 02:41:08.811177 1436700 command_runner.go:130] > # 	"KILL",
	I0131 02:41:08.811187 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.811199 1436700 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0131 02:41:08.811213 1436700 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0131 02:41:08.811223 1436700 command_runner.go:130] > # default_sysctls = [
	I0131 02:41:08.811229 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.811239 1436700 command_runner.go:130] > # List of devices on the host that a
	I0131 02:41:08.811254 1436700 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0131 02:41:08.811265 1436700 command_runner.go:130] > # allowed_devices = [
	I0131 02:41:08.811274 1436700 command_runner.go:130] > # 	"/dev/fuse",
	I0131 02:41:08.811281 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.811288 1436700 command_runner.go:130] > # List of additional devices, specified as
	I0131 02:41:08.811302 1436700 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0131 02:41:08.811315 1436700 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0131 02:41:08.811342 1436700 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0131 02:41:08.811352 1436700 command_runner.go:130] > # additional_devices = [
	I0131 02:41:08.811362 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.811373 1436700 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0131 02:41:08.811384 1436700 command_runner.go:130] > # cdi_spec_dirs = [
	I0131 02:41:08.811391 1436700 command_runner.go:130] > # 	"/etc/cdi",
	I0131 02:41:08.811398 1436700 command_runner.go:130] > # 	"/var/run/cdi",
	I0131 02:41:08.811404 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.811416 1436700 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0131 02:41:08.811429 1436700 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0131 02:41:08.811437 1436700 command_runner.go:130] > # Defaults to false.
	I0131 02:41:08.811449 1436700 command_runner.go:130] > # device_ownership_from_security_context = false
	I0131 02:41:08.811461 1436700 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0131 02:41:08.811474 1436700 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0131 02:41:08.811485 1436700 command_runner.go:130] > # hooks_dir = [
	I0131 02:41:08.811494 1436700 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0131 02:41:08.811500 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.811513 1436700 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0131 02:41:08.811525 1436700 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0131 02:41:08.811537 1436700 command_runner.go:130] > # its default mounts from the following two files:
	I0131 02:41:08.811546 1436700 command_runner.go:130] > #
	I0131 02:41:08.811557 1436700 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0131 02:41:08.811570 1436700 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0131 02:41:08.811579 1436700 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0131 02:41:08.811585 1436700 command_runner.go:130] > #
	I0131 02:41:08.811596 1436700 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0131 02:41:08.811611 1436700 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0131 02:41:08.811623 1436700 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0131 02:41:08.811635 1436700 command_runner.go:130] > #      only add mounts it finds in this file.
	I0131 02:41:08.811643 1436700 command_runner.go:130] > #
	I0131 02:41:08.811651 1436700 command_runner.go:130] > # default_mounts_file = ""
	I0131 02:41:08.811669 1436700 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0131 02:41:08.811683 1436700 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0131 02:41:08.811692 1436700 command_runner.go:130] > pids_limit = 1024
	I0131 02:41:08.811703 1436700 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0131 02:41:08.811716 1436700 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0131 02:41:08.811729 1436700 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0131 02:41:08.811747 1436700 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0131 02:41:08.811756 1436700 command_runner.go:130] > # log_size_max = -1
	I0131 02:41:08.811767 1436700 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0131 02:41:08.811777 1436700 command_runner.go:130] > # log_to_journald = false
	I0131 02:41:08.811790 1436700 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0131 02:41:08.811801 1436700 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0131 02:41:08.811813 1436700 command_runner.go:130] > # Path to directory for container attach sockets.
	I0131 02:41:08.811825 1436700 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0131 02:41:08.811838 1436700 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0131 02:41:08.811848 1436700 command_runner.go:130] > # bind_mount_prefix = ""
	I0131 02:41:08.811860 1436700 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0131 02:41:08.811867 1436700 command_runner.go:130] > # read_only = false
	I0131 02:41:08.811881 1436700 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0131 02:41:08.811894 1436700 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0131 02:41:08.811905 1436700 command_runner.go:130] > # live configuration reload.
	I0131 02:41:08.811915 1436700 command_runner.go:130] > # log_level = "info"
	I0131 02:41:08.811924 1436700 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0131 02:41:08.811933 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:41:08.811939 1436700 command_runner.go:130] > # log_filter = ""
	I0131 02:41:08.811952 1436700 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0131 02:41:08.811967 1436700 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0131 02:41:08.811975 1436700 command_runner.go:130] > # separated by comma.
	I0131 02:41:08.811986 1436700 command_runner.go:130] > # uid_mappings = ""
	I0131 02:41:08.811998 1436700 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0131 02:41:08.812011 1436700 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0131 02:41:08.812021 1436700 command_runner.go:130] > # separated by comma.
	I0131 02:41:08.812028 1436700 command_runner.go:130] > # gid_mappings = ""
	I0131 02:41:08.812037 1436700 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0131 02:41:08.812046 1436700 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0131 02:41:08.812061 1436700 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0131 02:41:08.812072 1436700 command_runner.go:130] > # minimum_mappable_uid = -1
	I0131 02:41:08.812086 1436700 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0131 02:41:08.812099 1436700 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0131 02:41:08.812112 1436700 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0131 02:41:08.812120 1436700 command_runner.go:130] > # minimum_mappable_gid = -1
	I0131 02:41:08.812127 1436700 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0131 02:41:08.812154 1436700 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0131 02:41:08.812168 1436700 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0131 02:41:08.812175 1436700 command_runner.go:130] > # ctr_stop_timeout = 30
	I0131 02:41:08.812188 1436700 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0131 02:41:08.812199 1436700 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0131 02:41:08.812210 1436700 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0131 02:41:08.812222 1436700 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0131 02:41:08.812232 1436700 command_runner.go:130] > drop_infra_ctr = false
	I0131 02:41:08.812242 1436700 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0131 02:41:08.812254 1436700 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0131 02:41:08.812270 1436700 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0131 02:41:08.812281 1436700 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0131 02:41:08.812292 1436700 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0131 02:41:08.812304 1436700 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0131 02:41:08.812314 1436700 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0131 02:41:08.812328 1436700 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0131 02:41:08.812336 1436700 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0131 02:41:08.812344 1436700 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0131 02:41:08.812358 1436700 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0131 02:41:08.812372 1436700 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0131 02:41:08.812383 1436700 command_runner.go:130] > # default_runtime = "runc"
	I0131 02:41:08.812393 1436700 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0131 02:41:08.812408 1436700 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0131 02:41:08.812427 1436700 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0131 02:41:08.812438 1436700 command_runner.go:130] > # creation as a file is not desired either.
	I0131 02:41:08.812451 1436700 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0131 02:41:08.812465 1436700 command_runner.go:130] > # the hostname is being managed dynamically.
	I0131 02:41:08.812477 1436700 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0131 02:41:08.812485 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.812496 1436700 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0131 02:41:08.812510 1436700 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0131 02:41:08.812521 1436700 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0131 02:41:08.812534 1436700 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0131 02:41:08.812543 1436700 command_runner.go:130] > #
	I0131 02:41:08.812550 1436700 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0131 02:41:08.812561 1436700 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0131 02:41:08.812571 1436700 command_runner.go:130] > #  runtime_type = "oci"
	I0131 02:41:08.812581 1436700 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0131 02:41:08.812592 1436700 command_runner.go:130] > #  privileged_without_host_devices = false
	I0131 02:41:08.812602 1436700 command_runner.go:130] > #  allowed_annotations = []
	I0131 02:41:08.812612 1436700 command_runner.go:130] > # Where:
	I0131 02:41:08.812621 1436700 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0131 02:41:08.812630 1436700 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0131 02:41:08.812638 1436700 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0131 02:41:08.812649 1436700 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0131 02:41:08.812659 1436700 command_runner.go:130] > #   in $PATH.
	I0131 02:41:08.812667 1436700 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0131 02:41:08.812675 1436700 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0131 02:41:08.812682 1436700 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0131 02:41:08.812688 1436700 command_runner.go:130] > #   state.
	I0131 02:41:08.812694 1436700 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0131 02:41:08.812702 1436700 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0131 02:41:08.812709 1436700 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0131 02:41:08.812717 1436700 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0131 02:41:08.812723 1436700 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0131 02:41:08.812731 1436700 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0131 02:41:08.812736 1436700 command_runner.go:130] > #   The currently recognized values are:
	I0131 02:41:08.812743 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0131 02:41:08.812752 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0131 02:41:08.812758 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0131 02:41:08.812765 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0131 02:41:08.812772 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0131 02:41:08.812781 1436700 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0131 02:41:08.812787 1436700 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0131 02:41:08.812796 1436700 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0131 02:41:08.812801 1436700 command_runner.go:130] > #   should be moved to the container's cgroup
	I0131 02:41:08.812806 1436700 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0131 02:41:08.812812 1436700 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0131 02:41:08.812818 1436700 command_runner.go:130] > runtime_type = "oci"
	I0131 02:41:08.812822 1436700 command_runner.go:130] > runtime_root = "/run/runc"
	I0131 02:41:08.812828 1436700 command_runner.go:130] > runtime_config_path = ""
	I0131 02:41:08.812832 1436700 command_runner.go:130] > monitor_path = ""
	I0131 02:41:08.812838 1436700 command_runner.go:130] > monitor_cgroup = ""
	I0131 02:41:08.812843 1436700 command_runner.go:130] > monitor_exec_cgroup = ""
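	For reference, a minimal sketch (not part of the test run) of how the [crio.runtime.runtimes.*] handler table printed above could be read programmatically in Go. It assumes the github.com/BurntSushi/toml package and the conventional /etc/crio/crio.conf path; the struct fields only mirror the handful of keys shown in the dump.

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// runtimeHandler mirrors the keys shown for [crio.runtime.runtimes.runc] above.
	type runtimeHandler struct {
		RuntimePath string `toml:"runtime_path"`
		RuntimeType string `toml:"runtime_type"`
		RuntimeRoot string `toml:"runtime_root"`
	}

	type crioConf struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConf
		// The path is an assumption; the log above does not state where the file lives.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			panic(err)
		}
		for name, h := range cfg.Crio.Runtime.Runtimes {
			fmt.Printf("runtime handler %q -> %s (type %s, root %s)\n",
				name, h.RuntimePath, h.RuntimeType, h.RuntimeRoot)
		}
	}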
	I0131 02:41:08.812851 1436700 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0131 02:41:08.812857 1436700 command_runner.go:130] > # running containers
	I0131 02:41:08.812867 1436700 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0131 02:41:08.812880 1436700 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0131 02:41:08.812916 1436700 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0131 02:41:08.812930 1436700 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0131 02:41:08.812941 1436700 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0131 02:41:08.812951 1436700 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0131 02:41:08.812963 1436700 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0131 02:41:08.812973 1436700 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0131 02:41:08.812984 1436700 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0131 02:41:08.812994 1436700 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0131 02:41:08.813005 1436700 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0131 02:41:08.813016 1436700 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0131 02:41:08.813027 1436700 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0131 02:41:08.813035 1436700 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0131 02:41:08.813045 1436700 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0131 02:41:08.813051 1436700 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0131 02:41:08.813061 1436700 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0131 02:41:08.813071 1436700 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0131 02:41:08.813079 1436700 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0131 02:41:08.813088 1436700 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0131 02:41:08.813094 1436700 command_runner.go:130] > # Example:
	I0131 02:41:08.813099 1436700 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0131 02:41:08.813106 1436700 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0131 02:41:08.813111 1436700 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0131 02:41:08.813119 1436700 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0131 02:41:08.813122 1436700 command_runner.go:130] > # cpuset = 0
	I0131 02:41:08.813127 1436700 command_runner.go:130] > # cpushares = "0-1"
	I0131 02:41:08.813133 1436700 command_runner.go:130] > # Where:
	I0131 02:41:08.813137 1436700 command_runner.go:130] > # The workload name is workload-type.
	I0131 02:41:08.813146 1436700 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0131 02:41:08.813152 1436700 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0131 02:41:08.813159 1436700 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0131 02:41:08.813167 1436700 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0131 02:41:08.813175 1436700 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0131 02:41:08.813179 1436700 command_runner.go:130] > # 
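	As a quick illustration of the workload annotations described above, a hedged sketch of the two annotations a pod would carry to opt into the example "workload-type" workload and override cpushares for one container; the container name "app" and the shares value are assumptions, not values from this run.

	package main

	import "fmt"

	func main() {
		// Activation annotation: key only, the value is ignored by CRI-O.
		// The per-container override follows the example format shown above.
		podAnnotations := map[string]string{
			"io.crio/workload":          "",
			"io.crio.workload-type/app": `{"cpushares": "512"}`,
		}
		for k, v := range podAnnotations {
			fmt.Printf("%s=%s\n", k, v)
		}
	}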
	I0131 02:41:08.813187 1436700 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0131 02:41:08.813191 1436700 command_runner.go:130] > #
	I0131 02:41:08.813197 1436700 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0131 02:41:08.813205 1436700 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0131 02:41:08.813212 1436700 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0131 02:41:08.813221 1436700 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0131 02:41:08.813226 1436700 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0131 02:41:08.813233 1436700 command_runner.go:130] > [crio.image]
	I0131 02:41:08.813239 1436700 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0131 02:41:08.813244 1436700 command_runner.go:130] > # default_transport = "docker://"
	I0131 02:41:08.813250 1436700 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0131 02:41:08.813258 1436700 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0131 02:41:08.813286 1436700 command_runner.go:130] > # global_auth_file = ""
	I0131 02:41:08.813303 1436700 command_runner.go:130] > # The image used to instantiate infra containers.
	I0131 02:41:08.813308 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:41:08.813316 1436700 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0131 02:41:08.813322 1436700 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0131 02:41:08.813330 1436700 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0131 02:41:08.813335 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:41:08.813342 1436700 command_runner.go:130] > # pause_image_auth_file = ""
	I0131 02:41:08.813348 1436700 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0131 02:41:08.813356 1436700 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0131 02:41:08.813363 1436700 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0131 02:41:08.813373 1436700 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0131 02:41:08.813380 1436700 command_runner.go:130] > # pause_command = "/pause"
	I0131 02:41:08.813386 1436700 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0131 02:41:08.813395 1436700 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0131 02:41:08.813401 1436700 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0131 02:41:08.813409 1436700 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0131 02:41:08.813414 1436700 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0131 02:41:08.813421 1436700 command_runner.go:130] > # signature_policy = ""
	I0131 02:41:08.813428 1436700 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0131 02:41:08.813437 1436700 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0131 02:41:08.813441 1436700 command_runner.go:130] > # changing them here.
	I0131 02:41:08.813447 1436700 command_runner.go:130] > # insecure_registries = [
	I0131 02:41:08.813451 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.813458 1436700 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0131 02:41:08.813465 1436700 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0131 02:41:08.813469 1436700 command_runner.go:130] > # image_volumes = "mkdir"
	I0131 02:41:08.813477 1436700 command_runner.go:130] > # Temporary directory to use for storing big files
	I0131 02:41:08.813483 1436700 command_runner.go:130] > # big_files_temporary_dir = ""
	I0131 02:41:08.813491 1436700 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0131 02:41:08.813495 1436700 command_runner.go:130] > # CNI plugins.
	I0131 02:41:08.813502 1436700 command_runner.go:130] > [crio.network]
	I0131 02:41:08.813508 1436700 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0131 02:41:08.813515 1436700 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0131 02:41:08.813520 1436700 command_runner.go:130] > # cni_default_network = ""
	I0131 02:41:08.813528 1436700 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0131 02:41:08.813533 1436700 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0131 02:41:08.813541 1436700 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0131 02:41:08.813545 1436700 command_runner.go:130] > # plugin_dirs = [
	I0131 02:41:08.813551 1436700 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0131 02:41:08.813555 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.813561 1436700 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0131 02:41:08.813567 1436700 command_runner.go:130] > [crio.metrics]
	I0131 02:41:08.813572 1436700 command_runner.go:130] > # Globally enable or disable metrics support.
	I0131 02:41:08.813578 1436700 command_runner.go:130] > enable_metrics = true
	I0131 02:41:08.813583 1436700 command_runner.go:130] > # Specify enabled metrics collectors.
	I0131 02:41:08.813591 1436700 command_runner.go:130] > # Per default all metrics are enabled.
	I0131 02:41:08.813597 1436700 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0131 02:41:08.813605 1436700 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0131 02:41:08.813611 1436700 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0131 02:41:08.813617 1436700 command_runner.go:130] > # metrics_collectors = [
	I0131 02:41:08.813621 1436700 command_runner.go:130] > # 	"operations",
	I0131 02:41:08.813626 1436700 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0131 02:41:08.813631 1436700 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0131 02:41:08.813635 1436700 command_runner.go:130] > # 	"operations_errors",
	I0131 02:41:08.813644 1436700 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0131 02:41:08.813648 1436700 command_runner.go:130] > # 	"image_pulls_by_name",
	I0131 02:41:08.813659 1436700 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0131 02:41:08.813666 1436700 command_runner.go:130] > # 	"image_pulls_failures",
	I0131 02:41:08.813670 1436700 command_runner.go:130] > # 	"image_pulls_successes",
	I0131 02:41:08.813675 1436700 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0131 02:41:08.813679 1436700 command_runner.go:130] > # 	"image_layer_reuse",
	I0131 02:41:08.813684 1436700 command_runner.go:130] > # 	"containers_oom_total",
	I0131 02:41:08.813688 1436700 command_runner.go:130] > # 	"containers_oom",
	I0131 02:41:08.813696 1436700 command_runner.go:130] > # 	"processes_defunct",
	I0131 02:41:08.813700 1436700 command_runner.go:130] > # 	"operations_total",
	I0131 02:41:08.813707 1436700 command_runner.go:130] > # 	"operations_latency_seconds",
	I0131 02:41:08.813712 1436700 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0131 02:41:08.813718 1436700 command_runner.go:130] > # 	"operations_errors_total",
	I0131 02:41:08.813723 1436700 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0131 02:41:08.813730 1436700 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0131 02:41:08.813734 1436700 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0131 02:41:08.813739 1436700 command_runner.go:130] > # 	"image_pulls_success_total",
	I0131 02:41:08.813743 1436700 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0131 02:41:08.813748 1436700 command_runner.go:130] > # 	"containers_oom_count_total",
	I0131 02:41:08.813751 1436700 command_runner.go:130] > # ]
	I0131 02:41:08.813756 1436700 command_runner.go:130] > # The port on which the metrics server will listen.
	I0131 02:41:08.813760 1436700 command_runner.go:130] > # metrics_port = 9090
	I0131 02:41:08.813764 1436700 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0131 02:41:08.813768 1436700 command_runner.go:130] > # metrics_socket = ""
	I0131 02:41:08.813773 1436700 command_runner.go:130] > # The certificate for the secure metrics server.
	I0131 02:41:08.813779 1436700 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0131 02:41:08.813785 1436700 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0131 02:41:08.813790 1436700 command_runner.go:130] > # certificate on any modification event.
	I0131 02:41:08.813794 1436700 command_runner.go:130] > # metrics_cert = ""
	I0131 02:41:08.813799 1436700 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0131 02:41:08.813803 1436700 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0131 02:41:08.813807 1436700 command_runner.go:130] > # metrics_key = ""
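	Since enable_metrics is true above, here is a minimal sketch of scraping the Prometheus endpoint. The port 9090 comes from the commented default metrics_port above; the localhost address and the /metrics path are conventional assumptions rather than values stated in this log.

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// Collector names such as crio_operations appear in this output when enabled.
		fmt.Printf("scraped %d bytes of metrics\n", len(body))
	}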
	I0131 02:41:08.813813 1436700 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0131 02:41:08.813817 1436700 command_runner.go:130] > [crio.tracing]
	I0131 02:41:08.813822 1436700 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0131 02:41:08.813826 1436700 command_runner.go:130] > # enable_tracing = false
	I0131 02:41:08.813831 1436700 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0131 02:41:08.813836 1436700 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0131 02:41:08.813843 1436700 command_runner.go:130] > # Number of samples to collect per million spans.
	I0131 02:41:08.813847 1436700 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0131 02:41:08.813853 1436700 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0131 02:41:08.813856 1436700 command_runner.go:130] > [crio.stats]
	I0131 02:41:08.813862 1436700 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0131 02:41:08.813868 1436700 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0131 02:41:08.813873 1436700 command_runner.go:130] > # stats_collection_period = 0
	I0131 02:41:08.813922 1436700 command_runner.go:130] ! time="2024-01-31 02:41:08.798540969Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0131 02:41:08.813947 1436700 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0131 02:41:08.814239 1436700 cni.go:84] Creating CNI manager for ""
	I0131 02:41:08.814250 1436700 cni.go:136] 3 nodes found, recommending kindnet
	I0131 02:41:08.814290 1436700 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 02:41:08.814321 1436700 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.60 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-263108 NodeName:multinode-263108-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.60 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 02:41:08.814512 1436700 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.60
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-263108-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.60
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 02:41:08.814590 1436700 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-263108-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.60
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-263108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
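	A hypothetical sketch of how the per-node InitConfiguration stanza above could be generated with Go's text/template; the template text and field names are illustrative, and only the node name, node IP and CRI socket are taken from the log.

	package main

	import (
		"os"
		"text/template"
	)

	const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("init").Parse(initConfig))
		data := struct{ NodeName, NodeIP string }{"multinode-263108-m02", "192.168.39.60"}
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}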
	I0131 02:41:08.814666 1436700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 02:41:08.823214 1436700 command_runner.go:130] > kubeadm
	I0131 02:41:08.823235 1436700 command_runner.go:130] > kubectl
	I0131 02:41:08.823241 1436700 command_runner.go:130] > kubelet
	I0131 02:41:08.823260 1436700 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 02:41:08.823321 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0131 02:41:08.831186 1436700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0131 02:41:08.846417 1436700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 02:41:08.861317 1436700 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I0131 02:41:08.864687 1436700 command_runner.go:130] > 192.168.39.109	control-plane.minikube.internal
	I0131 02:41:08.864930 1436700 host.go:66] Checking if "multinode-263108" exists ...
	I0131 02:41:08.865242 1436700 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:41:08.865396 1436700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:41:08.865430 1436700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:41:08.880492 1436700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0131 02:41:08.880992 1436700 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:41:08.881419 1436700 main.go:141] libmachine: Using API Version  1
	I0131 02:41:08.881439 1436700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:41:08.881791 1436700 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:41:08.881963 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:41:08.882174 1436700 start.go:304] JoinCluster: &{Name:multinode-263108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-263108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:41:08.882302 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0131 02:41:08.882321 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:41:08.885045 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:41:08.885486 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:41:08.885535 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:41:08.885675 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:41:08.885875 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:41:08.886038 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:41:08.886187 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa Username:docker}
	I0131 02:41:09.061234 1436700 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token oz6m2g.vi9bosdzfqh9nb7k --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 02:41:09.062903 1436700 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.60 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0131 02:41:09.062949 1436700 host.go:66] Checking if "multinode-263108" exists ...
	I0131 02:41:09.063288 1436700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:41:09.063327 1436700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:41:09.078944 1436700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42289
	I0131 02:41:09.079431 1436700 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:41:09.079879 1436700 main.go:141] libmachine: Using API Version  1
	I0131 02:41:09.079905 1436700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:41:09.080261 1436700 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:41:09.080452 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:41:09.080655 1436700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-263108-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0131 02:41:09.080685 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:41:09.083950 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:41:09.084389 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:41:09.084416 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:41:09.084598 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:41:09.084808 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:41:09.084992 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:41:09.085215 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa Username:docker}
	I0131 02:41:09.281145 1436700 command_runner.go:130] > node/multinode-263108-m02 cordoned
	I0131 02:41:12.323378 1436700 command_runner.go:130] > pod "busybox-5b5d89c9d6-9xlwh" has DeletionTimestamp older than 1 seconds, skipping
	I0131 02:41:12.323407 1436700 command_runner.go:130] > node/multinode-263108-m02 drained
	I0131 02:41:12.325033 1436700 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0131 02:41:12.325065 1436700 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zvrh5, kube-system/kube-proxy-x5jb7
	I0131 02:41:12.325096 1436700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-263108-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.244408916s)
	I0131 02:41:12.325114 1436700 node.go:108] successfully drained node "m02"
	I0131 02:41:12.325512 1436700 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:41:12.325767 1436700 kapi.go:59] client config for multinode-263108: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:41:12.326244 1436700 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0131 02:41:12.326307 1436700 round_trippers.go:463] DELETE https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:41:12.326317 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:12.326324 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:12.326330 1436700 round_trippers.go:473]     Content-Type: application/json
	I0131 02:41:12.326336 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:12.338806 1436700 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0131 02:41:12.338831 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:12.338842 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:12.338849 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:12.338854 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:12.338859 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:12.338864 1436700 round_trippers.go:580]     Content-Length: 171
	I0131 02:41:12.338872 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:12 GMT
	I0131 02:41:12.338878 1436700 round_trippers.go:580]     Audit-Id: a1805fd9-4abf-4d1a-bbca-49c8a42582f9
	I0131 02:41:12.339197 1436700 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-263108-m02","kind":"nodes","uid":"33ce8eca-eb98-4b22-953c-97e57c604ffc"}}
	I0131 02:41:12.339257 1436700 node.go:124] successfully deleted node "m02"
	I0131 02:41:12.339272 1436700 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.60 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
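The DELETE against /api/v1/nodes/multinode-263108-m02 logged above corresponds to a single typed client call. A minimal sketch, reusing a clientset built as in the cordon sketch (cs, the context and the helper name are assumptions):

```go
// deleteWorkerNode mirrors the DELETE request in the log; on success the API server
// answers with a v1 Status object whose status is "Success", as seen above.
func deleteWorkerNode(ctx context.Context, cs kubernetes.Interface, name string) error {
	return cs.CoreV1().Nodes().Delete(ctx, name, metav1.DeleteOptions{})
}
```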
	I0131 02:41:12.339300 1436700 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.60 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0131 02:41:12.339326 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oz6m2g.vi9bosdzfqh9nb7k --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-263108-m02"
	I0131 02:41:12.389469 1436700 command_runner.go:130] > [preflight] Running pre-flight checks
	I0131 02:41:12.551229 1436700 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0131 02:41:12.551259 1436700 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0131 02:41:12.620460 1436700 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 02:41:12.620489 1436700 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 02:41:12.620670 1436700 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0131 02:41:12.767419 1436700 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0131 02:41:13.293492 1436700 command_runner.go:130] > This node has joined the cluster:
	I0131 02:41:13.293528 1436700 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0131 02:41:13.293539 1436700 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0131 02:41:13.293549 1436700 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0131 02:41:13.296315 1436700 command_runner.go:130] ! W0131 02:41:12.380487    2657 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0131 02:41:13.296348 1436700 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0131 02:41:13.296360 1436700 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0131 02:41:13.296374 1436700 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0131 02:41:13.296410 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0131 02:41:13.554001 1436700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=multinode-263108 minikube.k8s.io/updated_at=2024_01_31T02_41_13_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:41:13.659090 1436700 command_runner.go:130] > node/multinode-263108-m02 labeled
	I0131 02:41:13.659119 1436700 command_runner.go:130] > node/multinode-263108-m03 labeled
	I0131 02:41:13.659233 1436700 start.go:306] JoinCluster complete in 4.777055711s
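The join itself is a kubeadm command executed on the worker over SSH. A minimal sketch of that pattern with golang.org/x/crypto/ssh; the user, key path and the token/hash placeholders are assumptions, the real values come from the control plane as in the command logged above:

```go
// join_sketch.go - run `kubeadm join` on a worker VM over SSH, roughly what
// minikube's ssh_runner does for the command shown in the log.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // assumed private key location
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // assumed VM user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.60:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join ` +
		`control-plane.minikube.internal:8443 --token <token> ` +
		`--discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all ` +
		`--cri-socket /var/run/crio/crio.sock --node-name=multinode-263108-m02`
	out, err := sess.CombinedOutput(cmd)
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
```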
	I0131 02:41:13.659263 1436700 cni.go:84] Creating CNI manager for ""
	I0131 02:41:13.659271 1436700 cni.go:136] 3 nodes found, recommending kindnet
	I0131 02:41:13.659327 1436700 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0131 02:41:13.664358 1436700 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0131 02:41:13.664389 1436700 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0131 02:41:13.664402 1436700 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0131 02:41:13.664411 1436700 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0131 02:41:13.664423 1436700 command_runner.go:130] > Access: 2024-01-31 02:38:46.128809878 +0000
	I0131 02:41:13.664431 1436700 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0131 02:41:13.664440 1436700 command_runner.go:130] > Change: 2024-01-31 02:38:44.179809878 +0000
	I0131 02:41:13.664455 1436700 command_runner.go:130] >  Birth: -
	I0131 02:41:13.664512 1436700 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0131 02:41:13.664530 1436700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0131 02:41:13.681761 1436700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0131 02:41:14.068938 1436700 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0131 02:41:14.068964 1436700 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0131 02:41:14.068970 1436700 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0131 02:41:14.068975 1436700 command_runner.go:130] > daemonset.apps/kindnet configured
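The CNI manifest is copied to /var/tmp/minikube/cni.yaml on the node and then applied with the bundled kubectl against the in-VM kubeconfig. A minimal sketch of that apply step via os/exec, with paths taken from the log and otherwise assumed to exist on the host where it runs:

```go
// apply_cni_sketch.go - apply a CNI manifest with kubectl, mirroring the
// `kubectl apply --kubeconfig=... -f /var/tmp/minikube/cni.yaml` call above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.28.4/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		panic(err)
	}
}
```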
	I0131 02:41:14.069451 1436700 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:41:14.069775 1436700 kapi.go:59] client config for multinode-263108: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:41:14.070194 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0131 02:41:14.070212 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.070225 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.070236 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.072308 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:14.072324 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.072331 1436700 round_trippers.go:580]     Audit-Id: 8677e5db-abd3-429d-b4ec-4fda26c2c886
	I0131 02:41:14.072336 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.072341 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.072346 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.072351 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.072357 1436700 round_trippers.go:580]     Content-Length: 291
	I0131 02:41:14.072364 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.072395 1436700 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2554d8bc-c0ad-485d-a9be-18a695e4434b","resourceVersion":"933","creationTimestamp":"2024-01-31T02:28:17Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0131 02:41:14.072495 1436700 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-263108" context rescaled to 1 replicas
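The rescale goes through the deployment's scale subresource: a GET of .../deployments/coredns/scale followed by an update of spec.replicas. A minimal sketch with the typed client (cs is a clientset as in the earlier sketches, the function name is an assumption):

```go
// rescaleCoreDNS sets the coredns deployment to one replica via the scale
// subresource, matching the "rescaled to 1 replicas" step in the log.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
```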
	I0131 02:41:14.072529 1436700 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.60 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0131 02:41:14.074568 1436700 out.go:177] * Verifying Kubernetes components...
	I0131 02:41:14.076142 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:41:14.090494 1436700 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:41:14.090773 1436700 kapi.go:59] client config for multinode-263108: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:41:14.091055 1436700 node_ready.go:35] waiting up to 6m0s for node "multinode-263108-m02" to be "Ready" ...
	I0131 02:41:14.091135 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:41:14.091142 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.091150 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.091156 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.093559 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:14.093592 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.093603 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.093612 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.093621 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.093633 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.093643 1436700 round_trippers.go:580]     Audit-Id: dc5124e0-0c8a-4307-b61b-b59dbb6720bb
	I0131 02:41:14.093656 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.093843 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m02","uid":"a3f4c8e0-3882-4367-a171-122b15d899d3","resourceVersion":"1089","creationTimestamp":"2024-01-31T02:41:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_41_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:41:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0131 02:41:14.094221 1436700 node_ready.go:49] node "multinode-263108-m02" has status "Ready":"True"
	I0131 02:41:14.094241 1436700 node_ready.go:38] duration metric: took 3.168842ms waiting for node "multinode-263108-m02" to be "Ready" ...
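node_ready decides readiness from the NodeReady condition on the node object returned by the GET above. A minimal polling sketch under the same 6m0s budget; cs, corev1 (k8s.io/api/core/v1) and the helper name are assumptions:

```go
// waitNodeReady polls a node until its Ready condition reports True, the same
// predicate node_ready.go applies, with a caller-supplied timeout.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}
```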
	I0131 02:41:14.094251 1436700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:41:14.094323 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0131 02:41:14.094331 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.094339 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.094345 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.098588 1436700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0131 02:41:14.098607 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.098616 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.098624 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.098631 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.098639 1436700 round_trippers.go:580]     Audit-Id: 18ef970f-c18a-4b9a-ba0f-4b3571693b67
	I0131 02:41:14.098648 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.098660 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.100420 1436700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1096"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"918","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82230 chars]
	I0131 02:41:14.103024 1436700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:14.103125 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:41:14.103136 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.103147 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.103159 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.105184 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:14.105204 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.105213 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.105221 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.105228 1436700 round_trippers.go:580]     Audit-Id: 8efa1a27-e7f0-4a98-a7ce-951ba4daf446
	I0131 02:41:14.105236 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.105247 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.105258 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.105383 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"918","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0131 02:41:14.105905 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:41:14.105921 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.105929 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.105937 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.108020 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:14.108036 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.108042 1436700 round_trippers.go:580]     Audit-Id: 3c6f4a15-0f73-48bb-933f-6208ccfb3962
	I0131 02:41:14.108048 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.108053 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.108058 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.108063 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.108069 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.108302 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:41:14.108681 1436700 pod_ready.go:92] pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace has status "Ready":"True"
	I0131 02:41:14.108708 1436700 pod_ready.go:81] duration metric: took 5.659027ms waiting for pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace to be "Ready" ...
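pod_ready applies the analogous check to each system-critical pod, looking at the PodReady condition in the pod status returned by the GETs above. A minimal sketch of that predicate (corev1 is k8s.io/api/core/v1, the function name is an assumption):

```go
// isPodReady reports whether a pod's Ready condition is True, the check behind
// the `has status "Ready":"True"` lines for the kube-system pods.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```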
	I0131 02:41:14.108724 1436700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:14.108792 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:41:14.108804 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.108815 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.108827 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.110528 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:41:14.110548 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.110557 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.110563 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.110571 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.110577 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.110586 1436700 round_trippers.go:580]     Audit-Id: 26a5844f-06cd-485d-be5c-7360e76c5129
	I0131 02:41:14.110597 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.110856 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"940","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0131 02:41:14.111166 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:41:14.111181 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.111192 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.111200 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.112952 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:41:14.112969 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.112978 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.112987 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.112994 1436700 round_trippers.go:580]     Audit-Id: 60e52ef6-5952-4907-be99-93c29d6a50ab
	I0131 02:41:14.113007 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.113017 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.113024 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.113154 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:41:14.113427 1436700 pod_ready.go:92] pod "etcd-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:41:14.113441 1436700 pod_ready.go:81] duration metric: took 4.709546ms waiting for pod "etcd-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:14.113462 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:14.113524 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-263108
	I0131 02:41:14.113533 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.113543 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.113553 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.115131 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:41:14.115144 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.115149 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.115155 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.115162 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.115170 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.115178 1436700 round_trippers.go:580]     Audit-Id: 1dcffe7a-fe9b-44b3-8fe2-e831173eb0f1
	I0131 02:41:14.115195 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.115385 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-263108","namespace":"kube-system","uid":"0c527200-696b-4681-af91-226016437113","resourceVersion":"910","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.109:8443","kubernetes.io/config.hash":"d670ff05d0032fcc9ae24f8fc09df250","kubernetes.io/config.mirror":"d670ff05d0032fcc9ae24f8fc09df250","kubernetes.io/config.seen":"2024-01-31T02:28:18.078204875Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0131 02:41:14.115793 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:41:14.115807 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.115816 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.115822 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.117600 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:41:14.117621 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.117630 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.117639 1436700 round_trippers.go:580]     Audit-Id: 41a19aa3-098d-4d1d-b02c-f15fa583650f
	I0131 02:41:14.117645 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.117656 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.117663 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.117668 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.117779 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:41:14.118047 1436700 pod_ready.go:92] pod "kube-apiserver-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:41:14.118062 1436700 pod_ready.go:81] duration metric: took 4.585177ms waiting for pod "kube-apiserver-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:14.118072 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:14.118119 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-263108
	I0131 02:41:14.118128 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.118139 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.118149 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.119678 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:41:14.119697 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.119707 1436700 round_trippers.go:580]     Audit-Id: 102b94ba-542f-4351-988b-130455884812
	I0131 02:41:14.119716 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.119732 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.119739 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.119744 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.119752 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.119930 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-263108","namespace":"kube-system","uid":"056ea293-6261-4e6c-9b3f-9fdc7d0727a2","resourceVersion":"914","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19e16e470f3c55d41e223486b2026f1d","kubernetes.io/config.mirror":"19e16e470f3c55d41e223486b2026f1d","kubernetes.io/config.seen":"2024-01-31T02:28:18.078205997Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0131 02:41:14.120315 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:41:14.120331 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.120338 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.120344 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.121956 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:41:14.121972 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.121981 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.121989 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.122003 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.122023 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.122036 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.122041 1436700 round_trippers.go:580]     Audit-Id: 4184ab00-6919-4020-8c8d-3cba5983c701
	I0131 02:41:14.122178 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:41:14.122537 1436700 pod_ready.go:92] pod "kube-controller-manager-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:41:14.122558 1436700 pod_ready.go:81] duration metric: took 4.478732ms waiting for pod "kube-controller-manager-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:14.122569 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mpxjh" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:14.291942 1436700 request.go:629] Waited for 169.308604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:41:14.292008 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:41:14.292013 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.292021 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.292027 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.294641 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:14.294676 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.294688 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.294697 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.294705 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.294713 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.294738 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.294749 1436700 round_trippers.go:580]     Audit-Id: c5b06ffd-2a70-49b1-84c2-726dde33c19e
	I0131 02:41:14.294971 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mpxjh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3a11b226-7a8e-4b25-a409-acc439d4bdfb","resourceVersion":"759","creationTimestamp":"2024-01-31T02:30:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0131 02:41:14.491698 1436700 request.go:629] Waited for 196.237558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:41:14.491808 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:41:14.491816 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.491829 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.491838 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.495099 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:41:14.495127 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.495137 1436700 round_trippers.go:580]     Audit-Id: 0920dc8b-ec98-4648-9221-47baea6ec109
	I0131 02:41:14.495147 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.495158 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.495166 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.495174 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.495181 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.495330 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m03","uid":"5d8d8dfa-72be-4459-b7bc-217aef0cc608","resourceVersion":"1090","creationTimestamp":"2024-01-31T02:31:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_41_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:31:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3965 chars]
	I0131 02:41:14.495718 1436700 pod_ready.go:92] pod "kube-proxy-mpxjh" in "kube-system" namespace has status "Ready":"True"
	I0131 02:41:14.495748 1436700 pod_ready.go:81] duration metric: took 373.167799ms waiting for pod "kube-proxy-mpxjh" in "kube-system" namespace to be "Ready" ...
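The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter, which kicks in when rest.Config leaves QPS and Burst at zero (the config dumps above show QPS:0, Burst:0, so the library defaults of roughly 5 requests/s with a burst of 10 apply). A minimal sketch of loosening that limit; the function name, kubeconfig path and values are illustrative, not what minikube uses:

```go
// newFastClient builds a clientset with a looser client-side rate limit so that
// bursts of GETs like the readiness checks above are not queued by the client.
func newFastClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // 0 means the client-go default of ~5 requests/s
	cfg.Burst = 100 // 0 means the client-go default burst of 10
	return kubernetes.NewForConfig(cfg)
}
```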
	I0131 02:41:14.495764 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x5jb7" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:14.691711 1436700 request.go:629] Waited for 195.836239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x5jb7
	I0131 02:41:14.691817 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x5jb7
	I0131 02:41:14.691830 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.691843 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.691855 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.695322 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:41:14.695354 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.695366 1436700 round_trippers.go:580]     Audit-Id: ddf39ea2-2527-4413-a1ec-bffcc8ecb192
	I0131 02:41:14.695375 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.695384 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.695395 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.695403 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.695411 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.695591 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x5jb7","generateName":"kube-proxy-","namespace":"kube-system","uid":"4dc3dae9-7781-4832-88ba-08a17ecfe557","resourceVersion":"1094","creationTimestamp":"2024-01-31T02:29:54Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0131 02:41:14.891575 1436700 request.go:629] Waited for 195.413171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:41:14.891652 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:41:14.891659 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:14.891667 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:14.891683 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:14.894617 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:14.894645 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:14.894656 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:14 GMT
	I0131 02:41:14.894665 1436700 round_trippers.go:580]     Audit-Id: c45d3a77-5597-4ebd-ae4a-f3cdaf9e87e6
	I0131 02:41:14.894673 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:14.894681 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:14.894694 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:14.894709 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:14.895008 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m02","uid":"a3f4c8e0-3882-4367-a171-122b15d899d3","resourceVersion":"1089","creationTimestamp":"2024-01-31T02:41:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_41_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:41:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0131 02:41:15.091587 1436700 request.go:629] Waited for 95.272825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x5jb7
	I0131 02:41:15.091673 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x5jb7
	I0131 02:41:15.091685 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:15.091712 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:15.091727 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:15.094689 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:15.094734 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:15.094743 1436700 round_trippers.go:580]     Audit-Id: b86f7692-12ce-4929-996e-10d595c9990c
	I0131 02:41:15.094752 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:15.094759 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:15.094779 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:15.094791 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:15.094800 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:15 GMT
	I0131 02:41:15.095174 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x5jb7","generateName":"kube-proxy-","namespace":"kube-system","uid":"4dc3dae9-7781-4832-88ba-08a17ecfe557","resourceVersion":"1109","creationTimestamp":"2024-01-31T02:29:54Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0131 02:41:15.292020 1436700 request.go:629] Waited for 196.362731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:41:15.292109 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:41:15.292115 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:15.292123 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:15.292131 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:15.294611 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:15.294632 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:15.294639 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:15 GMT
	I0131 02:41:15.294644 1436700 round_trippers.go:580]     Audit-Id: 38fcd9e2-7751-4b7f-bec9-4b7a16ed91c0
	I0131 02:41:15.294651 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:15.294659 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:15.294668 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:15.294680 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:15.294854 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m02","uid":"a3f4c8e0-3882-4367-a171-122b15d899d3","resourceVersion":"1089","creationTimestamp":"2024-01-31T02:41:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_41_13_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:41:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0131 02:41:15.295144 1436700 pod_ready.go:92] pod "kube-proxy-x5jb7" in "kube-system" namespace has status "Ready":"True"
	I0131 02:41:15.295160 1436700 pod_ready.go:81] duration metric: took 799.382874ms waiting for pod "kube-proxy-x5jb7" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:15.295170 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x85lz" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:15.491753 1436700 request.go:629] Waited for 196.478713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x85lz
	I0131 02:41:15.491820 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x85lz
	I0131 02:41:15.491825 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:15.491833 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:15.491840 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:15.494669 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:15.494693 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:15.494700 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:15.494705 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:15.494711 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:15.494716 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:15 GMT
	I0131 02:41:15.494721 1436700 round_trippers.go:580]     Audit-Id: 516aa245-1622-43bf-a338-d80cfdb6a06b
	I0131 02:41:15.494726 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:15.495112 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x85lz","generateName":"kube-proxy-","namespace":"kube-system","uid":"36e014b9-154e-43f4-b694-7f05bd31baef","resourceVersion":"837","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0131 02:41:15.692043 1436700 request.go:629] Waited for 196.393477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:41:15.692116 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:41:15.692126 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:15.692133 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:15.692139 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:15.694606 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:15.694628 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:15.694635 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:15.694640 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:15 GMT
	I0131 02:41:15.694646 1436700 round_trippers.go:580]     Audit-Id: dcdf9bdc-5ebf-453c-9526-070a683678f8
	I0131 02:41:15.694650 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:15.694655 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:15.694660 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:15.694996 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:41:15.695321 1436700 pod_ready.go:92] pod "kube-proxy-x85lz" in "kube-system" namespace has status "Ready":"True"
	I0131 02:41:15.695339 1436700 pod_ready.go:81] duration metric: took 400.163632ms waiting for pod "kube-proxy-x85lz" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:15.695348 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:15.891448 1436700 request.go:629] Waited for 196.030937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-263108
	I0131 02:41:15.891540 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-263108
	I0131 02:41:15.891545 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:15.891553 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:15.891560 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:15.894291 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:15.894314 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:15.894321 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:15.894327 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:15 GMT
	I0131 02:41:15.894333 1436700 round_trippers.go:580]     Audit-Id: 3c565fba-9641-499e-b79d-96eadad04040
	I0131 02:41:15.894338 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:15.894344 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:15.894349 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:15.894687 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-263108","namespace":"kube-system","uid":"7cc8534f-0f2b-457e-9942-e49d0f507875","resourceVersion":"941","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7320cc932f9ec0e3160c3b0ecdf22c62","kubernetes.io/config.mirror":"7320cc932f9ec0e3160c3b0ecdf22c62","kubernetes.io/config.seen":"2024-01-31T02:28:18.078207038Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0131 02:41:16.091355 1436700 request.go:629] Waited for 196.270089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:41:16.091419 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:41:16.091429 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:16.091437 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:16.091443 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:16.094095 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:16.094122 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:16.094132 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:16.094141 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:16.094149 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:16 GMT
	I0131 02:41:16.094157 1436700 round_trippers.go:580]     Audit-Id: 49158dc4-62d9-4ffa-91fe-74e708254390
	I0131 02:41:16.094166 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:16.094175 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:16.094332 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:41:16.094771 1436700 pod_ready.go:92] pod "kube-scheduler-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:41:16.094792 1436700 pod_ready.go:81] duration metric: took 399.437896ms waiting for pod "kube-scheduler-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:41:16.094802 1436700 pod_ready.go:38] duration metric: took 2.000532527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:41:16.094823 1436700 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 02:41:16.094870 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:41:16.108957 1436700 system_svc.go:56] duration metric: took 14.122832ms WaitForService to wait for kubelet.
	I0131 02:41:16.108989 1436700 kubeadm.go:581] duration metric: took 2.036431716s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 02:41:16.109014 1436700 node_conditions.go:102] verifying NodePressure condition ...
	I0131 02:41:16.291694 1436700 request.go:629] Waited for 182.594575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes
	I0131 02:41:16.291789 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes
	I0131 02:41:16.291794 1436700 round_trippers.go:469] Request Headers:
	I0131 02:41:16.291802 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:41:16.291811 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:41:16.294373 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:41:16.294387 1436700 round_trippers.go:577] Response Headers:
	I0131 02:41:16.294394 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:41:16.294400 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:41:16 GMT
	I0131 02:41:16.294408 1436700 round_trippers.go:580]     Audit-Id: 4c1d47a1-e2e5-4163-8207-ad417680ddb6
	I0131 02:41:16.294413 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:41:16.294426 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:41:16.294437 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:41:16.294911 1436700 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1111"},"items":[{"metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16438 chars]
	I0131 02:41:16.295543 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:41:16.295564 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:41:16.295573 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:41:16.295577 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:41:16.295581 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:41:16.295584 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:41:16.295588 1436700 node_conditions.go:105] duration metric: took 186.570021ms to run NodePressure ...
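The three capacity/CPU pairs above come from a single NodeList request: while verifying the NodePressure condition, the client reads status.capacity for each of the cluster's three nodes. A minimal client-go sketch of that read (not minikube's implementation; the default kubeconfig location is an assumption for illustration):

// A minimal client-go sketch of the node-capacity read logged above
// (not minikube's implementation; the kubeconfig location is an assumption).
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One GET of /api/v1/nodes, then status.capacity per node, mirroring the
	// "node storage ephemeral capacity" / "node cpu capacity" log lines.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}

The "Waited for ... due to client-side throttling" lines around these requests come from client-go's own rate limiter, not from API-server priority and fairness.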
	I0131 02:41:16.295603 1436700 start.go:228] waiting for startup goroutines ...
	I0131 02:41:16.295634 1436700 start.go:242] writing updated cluster config ...
	I0131 02:41:16.296080 1436700 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:41:16.296179 1436700 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/config.json ...
	I0131 02:41:16.298864 1436700 out.go:177] * Starting worker node multinode-263108-m03 in cluster multinode-263108
	I0131 02:41:16.300532 1436700 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 02:41:16.300555 1436700 cache.go:56] Caching tarball of preloaded images
	I0131 02:41:16.300656 1436700 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 02:41:16.300668 1436700 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 02:41:16.300773 1436700 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/config.json ...
	I0131 02:41:16.300980 1436700 start.go:365] acquiring machines lock for multinode-263108-m03: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 02:41:16.301031 1436700 start.go:369] acquired machines lock for "multinode-263108-m03" in 29.92µs
	I0131 02:41:16.301047 1436700 start.go:96] Skipping create...Using existing machine configuration
	I0131 02:41:16.301058 1436700 fix.go:54] fixHost starting: m03
	I0131 02:41:16.301352 1436700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:41:16.301378 1436700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:41:16.316164 1436700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0131 02:41:16.316568 1436700 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:41:16.317005 1436700 main.go:141] libmachine: Using API Version  1
	I0131 02:41:16.317022 1436700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:41:16.317343 1436700 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:41:16.317557 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .DriverName
	I0131 02:41:16.317716 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetState
	I0131 02:41:16.319295 1436700 fix.go:102] recreateIfNeeded on multinode-263108-m03: state=Running err=<nil>
	W0131 02:41:16.319316 1436700 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 02:41:16.322427 1436700 out.go:177] * Updating the running kvm2 "multinode-263108-m03" VM ...
	I0131 02:41:16.323713 1436700 machine.go:88] provisioning docker machine ...
	I0131 02:41:16.323733 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .DriverName
	I0131 02:41:16.323960 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetMachineName
	I0131 02:41:16.324147 1436700 buildroot.go:166] provisioning hostname "multinode-263108-m03"
	I0131 02:41:16.324164 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetMachineName
	I0131 02:41:16.324275 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHHostname
	I0131 02:41:16.326453 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.326886 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:41:16.326918 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.327016 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHPort
	I0131 02:41:16.327208 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:41:16.327368 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:41:16.327493 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHUsername
	I0131 02:41:16.327636 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:41:16.328079 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0131 02:41:16.328098 1436700 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-263108-m03 && echo "multinode-263108-m03" | sudo tee /etc/hostname
	I0131 02:41:16.451056 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-263108-m03
	
	I0131 02:41:16.451094 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHHostname
	I0131 02:41:16.453683 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.454129 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:41:16.454164 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.454294 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHPort
	I0131 02:41:16.454509 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:41:16.454690 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:41:16.454883 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHUsername
	I0131 02:41:16.455054 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:41:16.455416 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0131 02:41:16.455435 1436700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-263108-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-263108-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-263108-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 02:41:16.563231 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 02:41:16.563263 1436700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 02:41:16.563278 1436700 buildroot.go:174] setting up certificates
	I0131 02:41:16.563289 1436700 provision.go:83] configureAuth start
	I0131 02:41:16.563298 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetMachineName
	I0131 02:41:16.563610 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetIP
	I0131 02:41:16.566499 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.566907 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:41:16.566944 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.567098 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHHostname
	I0131 02:41:16.569876 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.570328 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:41:16.570357 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.570562 1436700 provision.go:138] copyHostCerts
	I0131 02:41:16.570623 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 02:41:16.570663 1436700 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 02:41:16.570676 1436700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 02:41:16.570781 1436700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 02:41:16.570881 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 02:41:16.570908 1436700 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 02:41:16.570918 1436700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 02:41:16.570951 1436700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 02:41:16.570999 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 02:41:16.571014 1436700 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 02:41:16.571020 1436700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 02:41:16.571045 1436700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 02:41:16.571104 1436700 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.multinode-263108-m03 san=[192.168.39.84 192.168.39.84 localhost 127.0.0.1 minikube multinode-263108-m03]
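The server certificate above is issued from the cluster CA with the node's IP address, localhost, and hostname as subject alternative names. A minimal sketch of issuing such a certificate with Go's crypto/x509 (not minikube's code; the file paths, RSA key size, and validity period are assumptions):

// A minimal sketch (not minikube's code) of generating a node server
// certificate signed by an existing CA, with SANs like the san=[...] list
// in the log line above. Paths, key size, and validity are assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the cluster CA created earlier (placeholder paths).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	must(err)

	// Fresh key pair for the node's server certificate.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-263108-m03"}},
		DNSNames:     []string{"localhost", "minikube", "multinode-263108-m03"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.84"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	must(err)

	must(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	must(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}), 0o600))
}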
	I0131 02:41:16.644646 1436700 provision.go:172] copyRemoteCerts
	I0131 02:41:16.644716 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 02:41:16.644742 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHHostname
	I0131 02:41:16.647702 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.648017 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:41:16.648042 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.648223 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHPort
	I0131 02:41:16.648439 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:41:16.648628 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHUsername
	I0131 02:41:16.648783 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108-m03/id_rsa Username:docker}
	I0131 02:41:16.731387 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0131 02:41:16.731466 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 02:41:16.752578 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0131 02:41:16.752644 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0131 02:41:16.772908 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0131 02:41:16.772985 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 02:41:16.793249 1436700 provision.go:86] duration metric: configureAuth took 229.945565ms
	I0131 02:41:16.793278 1436700 buildroot.go:189] setting minikube options for container-runtime
	I0131 02:41:16.793494 1436700 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:41:16.793589 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHHostname
	I0131 02:41:16.796383 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.796905 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:41:16.796934 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:41:16.797142 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHPort
	I0131 02:41:16.797387 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:41:16.797623 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:41:16.797815 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHUsername
	I0131 02:41:16.798057 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:41:16.798363 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0131 02:41:16.798381 1436700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 02:42:47.354460 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 02:42:47.354556 1436700 machine.go:91] provisioned docker machine in 1m31.030826575s
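The %!s(MISSING) tokens in the logged command are Go's fmt placeholder for a missing argument: the remote command contains a literal %s that was passed through a printf-style formatter when logged. What actually runs on the node writes a CRIO_MINIKUBE_OPTIONS line to /etc/sysconfig/crio.minikube and restarts CRI-O; the timestamps show that this single SSH command (02:41:16.798 to 02:42:47.354) accounts for nearly all of the 1m31s provisioning time reported above. A minimal sketch of that step over SSH (not minikube's implementation; the pre-connected *ssh.Client and the use of echo instead of the printf pipeline are assumptions):

// A minimal sketch (not minikube's implementation) of the provisioning step
// above: write a CRI-O drop-in over SSH and restart the service.
package provision

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// ConfigureCRIO writes /etc/sysconfig/crio.minikube with the given extra
// options and restarts CRI-O on the remote machine.
func ConfigureCRIO(client *ssh.Client, opts string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()

	line := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='%s'", opts)
	// echo is used here for brevity instead of the printf | tee pipeline in the log.
	cmd := fmt.Sprintf(
		"sudo mkdir -p /etc/sysconfig && echo %q | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
		line)
	if out, err := session.CombinedOutput(cmd); err != nil {
		return fmt.Errorf("remote provisioning failed: %v: %s", err, out)
	}
	return nil
}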
	I0131 02:42:47.354602 1436700 start.go:300] post-start starting for "multinode-263108-m03" (driver="kvm2")
	I0131 02:42:47.354623 1436700 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 02:42:47.354659 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .DriverName
	I0131 02:42:47.355014 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 02:42:47.355058 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHHostname
	I0131 02:42:47.357734 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.358177 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:42:47.358214 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.358424 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHPort
	I0131 02:42:47.358681 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:42:47.358882 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHUsername
	I0131 02:42:47.359044 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108-m03/id_rsa Username:docker}
	I0131 02:42:47.443703 1436700 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 02:42:47.447374 1436700 command_runner.go:130] > NAME=Buildroot
	I0131 02:42:47.447400 1436700 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0131 02:42:47.447407 1436700 command_runner.go:130] > ID=buildroot
	I0131 02:42:47.447416 1436700 command_runner.go:130] > VERSION_ID=2021.02.12
	I0131 02:42:47.447424 1436700 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0131 02:42:47.447576 1436700 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 02:42:47.447634 1436700 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 02:42:47.447714 1436700 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 02:42:47.447814 1436700 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 02:42:47.447826 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> /etc/ssl/certs/14199762.pem
	I0131 02:42:47.447957 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 02:42:47.455833 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:42:47.477073 1436700 start.go:303] post-start completed in 122.448625ms
	I0131 02:42:47.477106 1436700 fix.go:56] fixHost completed within 1m31.17604726s
	I0131 02:42:47.477134 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHHostname
	I0131 02:42:47.479915 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.480304 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:42:47.480337 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.480485 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHPort
	I0131 02:42:47.480725 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:42:47.480870 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:42:47.481015 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHUsername
	I0131 02:42:47.481213 1436700 main.go:141] libmachine: Using SSH client type: native
	I0131 02:42:47.481544 1436700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0131 02:42:47.481558 1436700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 02:42:47.591071 1436700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706668967.583072115
	
	I0131 02:42:47.591100 1436700 fix.go:206] guest clock: 1706668967.583072115
	I0131 02:42:47.591107 1436700 fix.go:219] Guest: 2024-01-31 02:42:47.583072115 +0000 UTC Remote: 2024-01-31 02:42:47.477111036 +0000 UTC m=+551.549585457 (delta=105.961079ms)
	I0131 02:42:47.591127 1436700 fix.go:190] guest clock delta is within tolerance: 105.961079ms
	I0131 02:42:47.591133 1436700 start.go:83] releasing machines lock for "multinode-263108-m03", held for 1m31.290092815s
	I0131 02:42:47.591161 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .DriverName
	I0131 02:42:47.591431 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetIP
	I0131 02:42:47.594056 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.594469 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:42:47.594513 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.596486 1436700 out.go:177] * Found network options:
	I0131 02:42:47.597852 1436700 out.go:177]   - NO_PROXY=192.168.39.109,192.168.39.60
	W0131 02:42:47.599259 1436700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0131 02:42:47.599283 1436700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0131 02:42:47.599297 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .DriverName
	I0131 02:42:47.599904 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .DriverName
	I0131 02:42:47.600096 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .DriverName
	I0131 02:42:47.600220 1436700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 02:42:47.600271 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHHostname
	W0131 02:42:47.600303 1436700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0131 02:42:47.600326 1436700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0131 02:42:47.600391 1436700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 02:42:47.600411 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHHostname
	I0131 02:42:47.603018 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.603310 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.603500 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:42:47.603536 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.603638 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHPort
	I0131 02:42:47.603793 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:42:47.603814 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:47.603827 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:42:47.604017 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHPort
	I0131 02:42:47.604031 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHUsername
	I0131 02:42:47.604206 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHKeyPath
	I0131 02:42:47.604229 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108-m03/id_rsa Username:docker}
	I0131 02:42:47.604356 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetSSHUsername
	I0131 02:42:47.604513 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108-m03/id_rsa Username:docker}
	I0131 02:42:47.830562 1436700 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0131 02:42:47.830697 1436700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0131 02:42:47.836162 1436700 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0131 02:42:47.836231 1436700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 02:42:47.836365 1436700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 02:42:47.844178 1436700 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0131 02:42:47.844212 1436700 start.go:475] detecting cgroup driver to use...
	I0131 02:42:47.844279 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 02:42:47.856704 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 02:42:47.869075 1436700 docker.go:217] disabling cri-docker service (if available) ...
	I0131 02:42:47.869144 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 02:42:47.882514 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 02:42:47.894149 1436700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 02:42:48.030643 1436700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 02:42:48.162302 1436700 docker.go:233] disabling docker service ...
	I0131 02:42:48.162382 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 02:42:48.211925 1436700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 02:42:48.242511 1436700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 02:42:48.478459 1436700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 02:42:48.611142 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 02:42:48.625912 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 02:42:48.643084 1436700 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0131 02:42:48.643521 1436700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 02:42:48.643587 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:42:48.654056 1436700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 02:42:48.654136 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:42:48.663761 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:42:48.677140 1436700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:42:48.687778 1436700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 02:42:48.698144 1436700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 02:42:48.706957 1436700 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0131 02:42:48.707187 1436700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 02:42:48.716266 1436700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 02:42:48.847392 1436700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 02:42:51.329050 1436700 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.481609925s)
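The sed commands a few lines above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as the pause image, the cgroupfs cgroup manager, and the "pod" conmon cgroup; the daemon-reload and crio restart then pick the changes up. A minimal Go sketch of the same text rewrites (not minikube's code; the starting config contents are made up for illustration):

// A minimal sketch (not minikube's code) of the sed-based edits to
// /etc/crio/crio.conf.d/02-crio.conf shown above. The input is a made-up
// example of the file's contents.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"

[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
`

	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// sed '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")

	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' followed by
	// sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}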
	I0131 02:42:51.329092 1436700 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 02:42:51.329151 1436700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 02:42:51.334717 1436700 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0131 02:42:51.334746 1436700 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0131 02:42:51.334754 1436700 command_runner.go:130] > Device: 16h/22d	Inode: 1176        Links: 1
	I0131 02:42:51.334761 1436700 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0131 02:42:51.334766 1436700 command_runner.go:130] > Access: 2024-01-31 02:42:51.239215523 +0000
	I0131 02:42:51.334772 1436700 command_runner.go:130] > Modify: 2024-01-31 02:42:51.239215523 +0000
	I0131 02:42:51.334777 1436700 command_runner.go:130] > Change: 2024-01-31 02:42:51.239215523 +0000
	I0131 02:42:51.334782 1436700 command_runner.go:130] >  Birth: -
	I0131 02:42:51.334815 1436700 start.go:543] Will wait 60s for crictl version
	I0131 02:42:51.334929 1436700 ssh_runner.go:195] Run: which crictl
	I0131 02:42:51.338836 1436700 command_runner.go:130] > /usr/bin/crictl
	I0131 02:42:51.338900 1436700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 02:42:51.381275 1436700 command_runner.go:130] > Version:  0.1.0
	I0131 02:42:51.381308 1436700 command_runner.go:130] > RuntimeName:  cri-o
	I0131 02:42:51.381356 1436700 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0131 02:42:51.381463 1436700 command_runner.go:130] > RuntimeApiVersion:  v1
	I0131 02:42:51.382852 1436700 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 02:42:51.382926 1436700 ssh_runner.go:195] Run: crio --version
	I0131 02:42:51.428206 1436700 command_runner.go:130] > crio version 1.24.1
	I0131 02:42:51.428228 1436700 command_runner.go:130] > Version:          1.24.1
	I0131 02:42:51.428235 1436700 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0131 02:42:51.428239 1436700 command_runner.go:130] > GitTreeState:     dirty
	I0131 02:42:51.428246 1436700 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0131 02:42:51.428252 1436700 command_runner.go:130] > GoVersion:        go1.19.9
	I0131 02:42:51.428260 1436700 command_runner.go:130] > Compiler:         gc
	I0131 02:42:51.428266 1436700 command_runner.go:130] > Platform:         linux/amd64
	I0131 02:42:51.428274 1436700 command_runner.go:130] > Linkmode:         dynamic
	I0131 02:42:51.428290 1436700 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0131 02:42:51.428302 1436700 command_runner.go:130] > SeccompEnabled:   true
	I0131 02:42:51.428306 1436700 command_runner.go:130] > AppArmorEnabled:  false
	I0131 02:42:51.428490 1436700 ssh_runner.go:195] Run: crio --version
	I0131 02:42:51.473944 1436700 command_runner.go:130] > crio version 1.24.1
	I0131 02:42:51.473974 1436700 command_runner.go:130] > Version:          1.24.1
	I0131 02:42:51.473981 1436700 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0131 02:42:51.473986 1436700 command_runner.go:130] > GitTreeState:     dirty
	I0131 02:42:51.473994 1436700 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0131 02:42:51.473999 1436700 command_runner.go:130] > GoVersion:        go1.19.9
	I0131 02:42:51.474003 1436700 command_runner.go:130] > Compiler:         gc
	I0131 02:42:51.474008 1436700 command_runner.go:130] > Platform:         linux/amd64
	I0131 02:42:51.474013 1436700 command_runner.go:130] > Linkmode:         dynamic
	I0131 02:42:51.474020 1436700 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0131 02:42:51.474024 1436700 command_runner.go:130] > SeccompEnabled:   true
	I0131 02:42:51.474029 1436700 command_runner.go:130] > AppArmorEnabled:  false
	I0131 02:42:51.477429 1436700 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 02:42:51.478840 1436700 out.go:177]   - env NO_PROXY=192.168.39.109
	I0131 02:42:51.480152 1436700 out.go:177]   - env NO_PROXY=192.168.39.109,192.168.39.60
	I0131 02:42:51.481401 1436700 main.go:141] libmachine: (multinode-263108-m03) Calling .GetIP
	I0131 02:42:51.483956 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:51.484300 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:16:b0", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:31:17 +0000 UTC Type:0 Mac:52:54:00:b8:16:b0 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:multinode-263108-m03 Clientid:01:52:54:00:b8:16:b0}
	I0131 02:42:51.484332 1436700 main.go:141] libmachine: (multinode-263108-m03) DBG | domain multinode-263108-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:b8:16:b0 in network mk-multinode-263108
	I0131 02:42:51.484550 1436700 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 02:42:51.488406 1436700 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0131 02:42:51.488619 1436700 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108 for IP: 192.168.39.84
	I0131 02:42:51.488640 1436700 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:42:51.488786 1436700 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 02:42:51.488825 1436700 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 02:42:51.488837 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0131 02:42:51.488853 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0131 02:42:51.488872 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0131 02:42:51.488889 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0131 02:42:51.488946 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 02:42:51.488987 1436700 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 02:42:51.489003 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 02:42:51.489028 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 02:42:51.489064 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 02:42:51.489095 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 02:42:51.489151 1436700 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:42:51.489189 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:42:51.489211 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem -> /usr/share/ca-certificates/1419976.pem
	I0131 02:42:51.489230 1436700 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> /usr/share/ca-certificates/14199762.pem
	I0131 02:42:51.489679 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 02:42:51.512969 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 02:42:51.535548 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 02:42:51.557715 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 02:42:51.578324 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 02:42:51.598993 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 02:42:51.619381 1436700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 02:42:51.640235 1436700 ssh_runner.go:195] Run: openssl version
	I0131 02:42:51.645548 1436700 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0131 02:42:51.645794 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 02:42:51.655685 1436700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:42:51.659965 1436700 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:42:51.660006 1436700 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:42:51.660060 1436700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:42:51.665107 1436700 command_runner.go:130] > b5213941
	I0131 02:42:51.665173 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 02:42:51.673288 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 02:42:51.683544 1436700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 02:42:51.688472 1436700 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 02:42:51.688636 1436700 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 02:42:51.688698 1436700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 02:42:51.693911 1436700 command_runner.go:130] > 51391683
	I0131 02:42:51.694246 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 02:42:51.702533 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 02:42:51.712178 1436700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 02:42:51.715971 1436700 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 02:42:51.716195 1436700 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 02:42:51.716251 1436700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 02:42:51.721114 1436700 command_runner.go:130] > 3ec20f2e
	I0131 02:42:51.721174 1436700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
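Each CA above is made trusted system-wide by copying it into /usr/share/ca-certificates, linking it under /etc/ssl/certs, hashing it with openssl x509 -hash, and creating an /etc/ssl/certs/<hash>.0 symlink, which is how OpenSSL-based clients locate trust anchors. A minimal sketch of the hash-and-symlink step (not minikube's code; the cert path and the reliance on the openssl CLI are assumptions):

// A minimal sketch (not minikube's code) of trusting a CA certificate the way
// the commands above do: compute its OpenSSL subject hash and symlink
// /etc/ssl/certs/<hash>.0 at it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mirror `ln -fs` by replacing any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}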
	I0131 02:42:51.729729 1436700 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 02:42:51.733456 1436700 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0131 02:42:51.733656 1436700 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0131 02:42:51.733745 1436700 ssh_runner.go:195] Run: crio config
	I0131 02:42:51.785422 1436700 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0131 02:42:51.785450 1436700 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0131 02:42:51.785457 1436700 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0131 02:42:51.785461 1436700 command_runner.go:130] > #
	I0131 02:42:51.785473 1436700 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0131 02:42:51.785485 1436700 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0131 02:42:51.785496 1436700 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0131 02:42:51.785503 1436700 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0131 02:42:51.785508 1436700 command_runner.go:130] > # reload'.
	I0131 02:42:51.785514 1436700 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0131 02:42:51.785525 1436700 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0131 02:42:51.785532 1436700 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0131 02:42:51.785538 1436700 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0131 02:42:51.785542 1436700 command_runner.go:130] > [crio]
	I0131 02:42:51.785550 1436700 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0131 02:42:51.785559 1436700 command_runner.go:130] > # container images, in this directory.
	I0131 02:42:51.785878 1436700 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0131 02:42:51.785916 1436700 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0131 02:42:51.786032 1436700 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0131 02:42:51.786050 1436700 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0131 02:42:51.786058 1436700 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0131 02:42:51.786186 1436700 command_runner.go:130] > storage_driver = "overlay"
	I0131 02:42:51.786206 1436700 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I0131 02:42:51.786217 1436700 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0131 02:42:51.786228 1436700 command_runner.go:130] > storage_option = [
	I0131 02:42:51.786545 1436700 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0131 02:42:51.786616 1436700 command_runner.go:130] > ]
	I0131 02:42:51.786636 1436700 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0131 02:42:51.786645 1436700 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0131 02:42:51.786978 1436700 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0131 02:42:51.786990 1436700 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0131 02:42:51.786996 1436700 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0131 02:42:51.787001 1436700 command_runner.go:130] > # always happen on a node reboot
	I0131 02:42:51.787406 1436700 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0131 02:42:51.787425 1436700 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0131 02:42:51.787434 1436700 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0131 02:42:51.787450 1436700 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0131 02:42:51.787742 1436700 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0131 02:42:51.787754 1436700 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0131 02:42:51.787762 1436700 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0131 02:42:51.788059 1436700 command_runner.go:130] > # internal_wipe = true
	I0131 02:42:51.788074 1436700 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0131 02:42:51.788084 1436700 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0131 02:42:51.788097 1436700 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0131 02:42:51.788457 1436700 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0131 02:42:51.788470 1436700 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0131 02:42:51.788478 1436700 command_runner.go:130] > [crio.api]
	I0131 02:42:51.788483 1436700 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0131 02:42:51.788837 1436700 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0131 02:42:51.788851 1436700 command_runner.go:130] > # IP address on which the stream server will listen.
	I0131 02:42:51.789180 1436700 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0131 02:42:51.789202 1436700 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0131 02:42:51.789211 1436700 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0131 02:42:51.789484 1436700 command_runner.go:130] > # stream_port = "0"
	I0131 02:42:51.789494 1436700 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0131 02:42:51.789499 1436700 command_runner.go:130] > # stream_enable_tls = false
	I0131 02:42:51.789505 1436700 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0131 02:42:51.789509 1436700 command_runner.go:130] > # stream_idle_timeout = ""
	I0131 02:42:51.789515 1436700 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0131 02:42:51.789524 1436700 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0131 02:42:51.789528 1436700 command_runner.go:130] > # minutes.
	I0131 02:42:51.789534 1436700 command_runner.go:130] > # stream_tls_cert = ""
	I0131 02:42:51.789541 1436700 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0131 02:42:51.789552 1436700 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0131 02:42:51.789558 1436700 command_runner.go:130] > # stream_tls_key = ""
	I0131 02:42:51.789564 1436700 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0131 02:42:51.789573 1436700 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0131 02:42:51.789579 1436700 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0131 02:42:51.789628 1436700 command_runner.go:130] > # stream_tls_ca = ""
	I0131 02:42:51.789649 1436700 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0131 02:42:51.789662 1436700 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0131 02:42:51.789676 1436700 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0131 02:42:51.789688 1436700 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0131 02:42:51.789717 1436700 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0131 02:42:51.789730 1436700 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0131 02:42:51.789741 1436700 command_runner.go:130] > [crio.runtime]
	I0131 02:42:51.789751 1436700 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0131 02:42:51.789764 1436700 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0131 02:42:51.789771 1436700 command_runner.go:130] > # "nofile=1024:2048"
	I0131 02:42:51.789781 1436700 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0131 02:42:51.789792 1436700 command_runner.go:130] > # default_ulimits = [
	I0131 02:42:51.789799 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.789812 1436700 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0131 02:42:51.789823 1436700 command_runner.go:130] > # no_pivot = false
	I0131 02:42:51.789833 1436700 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0131 02:42:51.789849 1436700 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0131 02:42:51.789860 1436700 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0131 02:42:51.789870 1436700 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0131 02:42:51.789882 1436700 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0131 02:42:51.789897 1436700 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0131 02:42:51.789908 1436700 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0131 02:42:51.789917 1436700 command_runner.go:130] > # Cgroup setting for conmon
	I0131 02:42:51.789931 1436700 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0131 02:42:51.789941 1436700 command_runner.go:130] > conmon_cgroup = "pod"
	I0131 02:42:51.789955 1436700 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0131 02:42:51.789968 1436700 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0131 02:42:51.789983 1436700 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0131 02:42:51.789992 1436700 command_runner.go:130] > conmon_env = [
	I0131 02:42:51.790003 1436700 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0131 02:42:51.790012 1436700 command_runner.go:130] > ]
	I0131 02:42:51.790028 1436700 command_runner.go:130] > # Additional environment variables to set for all the
	I0131 02:42:51.790039 1436700 command_runner.go:130] > # containers. These are overridden if set in the
	I0131 02:42:51.790051 1436700 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0131 02:42:51.790061 1436700 command_runner.go:130] > # default_env = [
	I0131 02:42:51.790074 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.790087 1436700 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0131 02:42:51.790099 1436700 command_runner.go:130] > # selinux = false
	I0131 02:42:51.790113 1436700 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0131 02:42:51.790127 1436700 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0131 02:42:51.790140 1436700 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0131 02:42:51.790150 1436700 command_runner.go:130] > # seccomp_profile = ""
	I0131 02:42:51.790160 1436700 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0131 02:42:51.790173 1436700 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0131 02:42:51.790187 1436700 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0131 02:42:51.790198 1436700 command_runner.go:130] > # which might increase security.
	I0131 02:42:51.790209 1436700 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0131 02:42:51.790217 1436700 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0131 02:42:51.790228 1436700 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0131 02:42:51.790236 1436700 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0131 02:42:51.790244 1436700 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0131 02:42:51.790249 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:42:51.790258 1436700 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0131 02:42:51.790264 1436700 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0131 02:42:51.790270 1436700 command_runner.go:130] > # the cgroup blockio controller.
	I0131 02:42:51.790275 1436700 command_runner.go:130] > # blockio_config_file = ""
	I0131 02:42:51.790284 1436700 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0131 02:42:51.790289 1436700 command_runner.go:130] > # irqbalance daemon.
	I0131 02:42:51.790297 1436700 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0131 02:42:51.790310 1436700 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0131 02:42:51.790323 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:42:51.790333 1436700 command_runner.go:130] > # rdt_config_file = ""
	I0131 02:42:51.790345 1436700 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0131 02:42:51.790351 1436700 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0131 02:42:51.790364 1436700 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0131 02:42:51.790375 1436700 command_runner.go:130] > # separate_pull_cgroup = ""
	I0131 02:42:51.790386 1436700 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0131 02:42:51.790396 1436700 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0131 02:42:51.790405 1436700 command_runner.go:130] > # will be added.
	I0131 02:42:51.790413 1436700 command_runner.go:130] > # default_capabilities = [
	I0131 02:42:51.790422 1436700 command_runner.go:130] > # 	"CHOWN",
	I0131 02:42:51.790428 1436700 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0131 02:42:51.790437 1436700 command_runner.go:130] > # 	"FSETID",
	I0131 02:42:51.790444 1436700 command_runner.go:130] > # 	"FOWNER",
	I0131 02:42:51.790453 1436700 command_runner.go:130] > # 	"SETGID",
	I0131 02:42:51.790458 1436700 command_runner.go:130] > # 	"SETUID",
	I0131 02:42:51.790468 1436700 command_runner.go:130] > # 	"SETPCAP",
	I0131 02:42:51.790475 1436700 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0131 02:42:51.790501 1436700 command_runner.go:130] > # 	"KILL",
	I0131 02:42:51.790510 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.790521 1436700 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0131 02:42:51.790533 1436700 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0131 02:42:51.790544 1436700 command_runner.go:130] > # default_sysctls = [
	I0131 02:42:51.790553 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.790562 1436700 command_runner.go:130] > # List of devices on the host that a
	I0131 02:42:51.790575 1436700 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0131 02:42:51.790583 1436700 command_runner.go:130] > # allowed_devices = [
	I0131 02:42:51.790591 1436700 command_runner.go:130] > # 	"/dev/fuse",
	I0131 02:42:51.790599 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.790607 1436700 command_runner.go:130] > # List of additional devices, specified as
	I0131 02:42:51.790615 1436700 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0131 02:42:51.790623 1436700 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0131 02:42:51.790640 1436700 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0131 02:42:51.790651 1436700 command_runner.go:130] > # additional_devices = [
	I0131 02:42:51.790656 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.790664 1436700 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0131 02:42:51.790671 1436700 command_runner.go:130] > # cdi_spec_dirs = [
	I0131 02:42:51.790676 1436700 command_runner.go:130] > # 	"/etc/cdi",
	I0131 02:42:51.790682 1436700 command_runner.go:130] > # 	"/var/run/cdi",
	I0131 02:42:51.790688 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.790700 1436700 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0131 02:42:51.790719 1436700 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0131 02:42:51.790730 1436700 command_runner.go:130] > # Defaults to false.
	I0131 02:42:51.790741 1436700 command_runner.go:130] > # device_ownership_from_security_context = false
	I0131 02:42:51.790756 1436700 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0131 02:42:51.790770 1436700 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0131 02:42:51.790781 1436700 command_runner.go:130] > # hooks_dir = [
	I0131 02:42:51.790792 1436700 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0131 02:42:51.790799 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.790813 1436700 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0131 02:42:51.790828 1436700 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0131 02:42:51.790840 1436700 command_runner.go:130] > # its default mounts from the following two files:
	I0131 02:42:51.790849 1436700 command_runner.go:130] > #
	I0131 02:42:51.790864 1436700 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0131 02:42:51.790878 1436700 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0131 02:42:51.790891 1436700 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0131 02:42:51.790900 1436700 command_runner.go:130] > #
	I0131 02:42:51.790912 1436700 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0131 02:42:51.790927 1436700 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0131 02:42:51.790941 1436700 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0131 02:42:51.790953 1436700 command_runner.go:130] > #      only add mounts it finds in this file.
	I0131 02:42:51.790962 1436700 command_runner.go:130] > #
	I0131 02:42:51.790971 1436700 command_runner.go:130] > # default_mounts_file = ""
	I0131 02:42:51.790983 1436700 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0131 02:42:51.790997 1436700 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0131 02:42:51.791007 1436700 command_runner.go:130] > pids_limit = 1024
	I0131 02:42:51.791020 1436700 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0131 02:42:51.791036 1436700 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0131 02:42:51.791051 1436700 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0131 02:42:51.791068 1436700 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0131 02:42:51.791078 1436700 command_runner.go:130] > # log_size_max = -1
	I0131 02:42:51.791091 1436700 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0131 02:42:51.791102 1436700 command_runner.go:130] > # log_to_journald = false
	I0131 02:42:51.791116 1436700 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0131 02:42:51.791128 1436700 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0131 02:42:51.791141 1436700 command_runner.go:130] > # Path to directory for container attach sockets.
	I0131 02:42:51.791153 1436700 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0131 02:42:51.791166 1436700 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0131 02:42:51.791176 1436700 command_runner.go:130] > # bind_mount_prefix = ""
	I0131 02:42:51.791186 1436700 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0131 02:42:51.791197 1436700 command_runner.go:130] > # read_only = false
	I0131 02:42:51.791210 1436700 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0131 02:42:51.791222 1436700 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0131 02:42:51.791233 1436700 command_runner.go:130] > # live configuration reload.
	I0131 02:42:51.791241 1436700 command_runner.go:130] > # log_level = "info"
	I0131 02:42:51.791255 1436700 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0131 02:42:51.791267 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:42:51.791278 1436700 command_runner.go:130] > # log_filter = ""
	I0131 02:42:51.791292 1436700 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0131 02:42:51.791305 1436700 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0131 02:42:51.791314 1436700 command_runner.go:130] > # separated by comma.
	I0131 02:42:51.791324 1436700 command_runner.go:130] > # uid_mappings = ""
	I0131 02:42:51.791339 1436700 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0131 02:42:51.791353 1436700 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0131 02:42:51.791363 1436700 command_runner.go:130] > # separated by comma.
	I0131 02:42:51.791372 1436700 command_runner.go:130] > # gid_mappings = ""
	I0131 02:42:51.791386 1436700 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0131 02:42:51.791401 1436700 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0131 02:42:51.791414 1436700 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0131 02:42:51.791425 1436700 command_runner.go:130] > # minimum_mappable_uid = -1
	I0131 02:42:51.791441 1436700 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0131 02:42:51.791455 1436700 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0131 02:42:51.791469 1436700 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0131 02:42:51.791480 1436700 command_runner.go:130] > # minimum_mappable_gid = -1
	I0131 02:42:51.791494 1436700 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0131 02:42:51.791509 1436700 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0131 02:42:51.791521 1436700 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0131 02:42:51.791528 1436700 command_runner.go:130] > # ctr_stop_timeout = 30
	I0131 02:42:51.791542 1436700 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0131 02:42:51.791556 1436700 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0131 02:42:51.791568 1436700 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0131 02:42:51.791580 1436700 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0131 02:42:51.791590 1436700 command_runner.go:130] > drop_infra_ctr = false
	I0131 02:42:51.791601 1436700 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0131 02:42:51.791614 1436700 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0131 02:42:51.791630 1436700 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0131 02:42:51.791641 1436700 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0131 02:42:51.791653 1436700 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0131 02:42:51.791665 1436700 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0131 02:42:51.791676 1436700 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0131 02:42:51.791692 1436700 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0131 02:42:51.791703 1436700 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0131 02:42:51.791721 1436700 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0131 02:42:51.791736 1436700 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0131 02:42:51.791751 1436700 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0131 02:42:51.791762 1436700 command_runner.go:130] > # default_runtime = "runc"
	I0131 02:42:51.791772 1436700 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0131 02:42:51.791788 1436700 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0131 02:42:51.791807 1436700 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0131 02:42:51.791820 1436700 command_runner.go:130] > # creation as a file is not desired either.
	I0131 02:42:51.791837 1436700 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0131 02:42:51.791849 1436700 command_runner.go:130] > # the hostname is being managed dynamically.
	I0131 02:42:51.791861 1436700 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0131 02:42:51.791869 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.791882 1436700 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0131 02:42:51.791896 1436700 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0131 02:42:51.791912 1436700 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0131 02:42:51.791927 1436700 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0131 02:42:51.791936 1436700 command_runner.go:130] > #
	I0131 02:42:51.791945 1436700 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0131 02:42:51.791956 1436700 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0131 02:42:51.791964 1436700 command_runner.go:130] > #  runtime_type = "oci"
	I0131 02:42:51.791976 1436700 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0131 02:42:51.791988 1436700 command_runner.go:130] > #  privileged_without_host_devices = false
	I0131 02:42:51.792001 1436700 command_runner.go:130] > #  allowed_annotations = []
	I0131 02:42:51.792011 1436700 command_runner.go:130] > # Where:
	I0131 02:42:51.792022 1436700 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0131 02:42:51.792036 1436700 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0131 02:42:51.792050 1436700 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0131 02:42:51.792065 1436700 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0131 02:42:51.792075 1436700 command_runner.go:130] > #   in $PATH.
	I0131 02:42:51.792089 1436700 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0131 02:42:51.792101 1436700 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0131 02:42:51.792115 1436700 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0131 02:42:51.792125 1436700 command_runner.go:130] > #   state.
	I0131 02:42:51.792137 1436700 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0131 02:42:51.792151 1436700 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0131 02:42:51.792166 1436700 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0131 02:42:51.792179 1436700 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0131 02:42:51.792193 1436700 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0131 02:42:51.792207 1436700 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0131 02:42:51.792217 1436700 command_runner.go:130] > #   The currently recognized values are:
	I0131 02:42:51.792232 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0131 02:42:51.792248 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0131 02:42:51.792262 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0131 02:42:51.792276 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0131 02:42:51.792292 1436700 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0131 02:42:51.792306 1436700 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0131 02:42:51.792320 1436700 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0131 02:42:51.792335 1436700 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0131 02:42:51.792347 1436700 command_runner.go:130] > #   should be moved to the container's cgroup
	I0131 02:42:51.792358 1436700 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0131 02:42:51.792371 1436700 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0131 02:42:51.792382 1436700 command_runner.go:130] > runtime_type = "oci"
	I0131 02:42:51.792391 1436700 command_runner.go:130] > runtime_root = "/run/runc"
	I0131 02:42:51.792404 1436700 command_runner.go:130] > runtime_config_path = ""
	I0131 02:42:51.792415 1436700 command_runner.go:130] > monitor_path = ""
	I0131 02:42:51.792424 1436700 command_runner.go:130] > monitor_cgroup = ""
	I0131 02:42:51.792436 1436700 command_runner.go:130] > monitor_exec_cgroup = ""
	I0131 02:42:51.792450 1436700 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0131 02:42:51.792461 1436700 command_runner.go:130] > # running containers
	I0131 02:42:51.792470 1436700 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0131 02:42:51.792484 1436700 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0131 02:42:51.792520 1436700 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0131 02:42:51.792534 1436700 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0131 02:42:51.792546 1436700 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0131 02:42:51.792558 1436700 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0131 02:42:51.792569 1436700 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0131 02:42:51.792580 1436700 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0131 02:42:51.792589 1436700 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0131 02:42:51.792601 1436700 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0131 02:42:51.792615 1436700 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0131 02:42:51.792628 1436700 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0131 02:42:51.792642 1436700 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0131 02:42:51.792659 1436700 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0131 02:42:51.792675 1436700 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0131 02:42:51.792688 1436700 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0131 02:42:51.792711 1436700 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0131 02:42:51.792729 1436700 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0131 02:42:51.792742 1436700 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0131 02:42:51.792758 1436700 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0131 02:42:51.792767 1436700 command_runner.go:130] > # Example:
	I0131 02:42:51.792776 1436700 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0131 02:42:51.792788 1436700 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0131 02:42:51.792800 1436700 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0131 02:42:51.792813 1436700 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0131 02:42:51.792823 1436700 command_runner.go:130] > # cpuset = 0
	I0131 02:42:51.792833 1436700 command_runner.go:130] > # cpushares = "0-1"
	I0131 02:42:51.792843 1436700 command_runner.go:130] > # Where:
	I0131 02:42:51.792854 1436700 command_runner.go:130] > # The workload name is workload-type.
	I0131 02:42:51.792872 1436700 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0131 02:42:51.792885 1436700 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0131 02:42:51.792898 1436700 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0131 02:42:51.792915 1436700 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0131 02:42:51.792928 1436700 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0131 02:42:51.792937 1436700 command_runner.go:130] > # 
	I0131 02:42:51.792949 1436700 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0131 02:42:51.792958 1436700 command_runner.go:130] > #
	I0131 02:42:51.792969 1436700 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0131 02:42:51.792983 1436700 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0131 02:42:51.792997 1436700 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0131 02:42:51.793012 1436700 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0131 02:42:51.793025 1436700 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0131 02:42:51.793034 1436700 command_runner.go:130] > [crio.image]
	I0131 02:42:51.793045 1436700 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0131 02:42:51.793058 1436700 command_runner.go:130] > # default_transport = "docker://"
	I0131 02:42:51.793072 1436700 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0131 02:42:51.793087 1436700 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0131 02:42:51.793097 1436700 command_runner.go:130] > # global_auth_file = ""
	I0131 02:42:51.793108 1436700 command_runner.go:130] > # The image used to instantiate infra containers.
	I0131 02:42:51.793119 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:42:51.793129 1436700 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0131 02:42:51.793143 1436700 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0131 02:42:51.793157 1436700 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0131 02:42:51.793169 1436700 command_runner.go:130] > # This option supports live configuration reload.
	I0131 02:42:51.793180 1436700 command_runner.go:130] > # pause_image_auth_file = ""
	I0131 02:42:51.793194 1436700 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0131 02:42:51.793207 1436700 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0131 02:42:51.793218 1436700 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0131 02:42:51.793233 1436700 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0131 02:42:51.793244 1436700 command_runner.go:130] > # pause_command = "/pause"
	I0131 02:42:51.793258 1436700 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0131 02:42:51.793273 1436700 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0131 02:42:51.793288 1436700 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0131 02:42:51.793303 1436700 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0131 02:42:51.793316 1436700 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0131 02:42:51.793327 1436700 command_runner.go:130] > # signature_policy = ""
	I0131 02:42:51.793341 1436700 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0131 02:42:51.793355 1436700 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0131 02:42:51.793365 1436700 command_runner.go:130] > # changing them here.
	I0131 02:42:51.793376 1436700 command_runner.go:130] > # insecure_registries = [
	I0131 02:42:51.793382 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.793394 1436700 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0131 02:42:51.793406 1436700 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0131 02:42:51.793417 1436700 command_runner.go:130] > # image_volumes = "mkdir"
	I0131 02:42:51.793429 1436700 command_runner.go:130] > # Temporary directory to use for storing big files
	I0131 02:42:51.793441 1436700 command_runner.go:130] > # big_files_temporary_dir = ""
	I0131 02:42:51.793455 1436700 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0131 02:42:51.793469 1436700 command_runner.go:130] > # CNI plugins.
	I0131 02:42:51.793479 1436700 command_runner.go:130] > [crio.network]
	I0131 02:42:51.793493 1436700 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0131 02:42:51.793506 1436700 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0131 02:42:51.793518 1436700 command_runner.go:130] > # cni_default_network = ""
	I0131 02:42:51.793531 1436700 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0131 02:42:51.793543 1436700 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0131 02:42:51.793556 1436700 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0131 02:42:51.793566 1436700 command_runner.go:130] > # plugin_dirs = [
	I0131 02:42:51.793577 1436700 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0131 02:42:51.793586 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.793597 1436700 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0131 02:42:51.793607 1436700 command_runner.go:130] > [crio.metrics]
	I0131 02:42:51.793617 1436700 command_runner.go:130] > # Globally enable or disable metrics support.
	I0131 02:42:51.793627 1436700 command_runner.go:130] > enable_metrics = true
	I0131 02:42:51.793636 1436700 command_runner.go:130] > # Specify enabled metrics collectors.
	I0131 02:42:51.793648 1436700 command_runner.go:130] > # Per default all metrics are enabled.
	I0131 02:42:51.793662 1436700 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0131 02:42:51.793676 1436700 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0131 02:42:51.793689 1436700 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0131 02:42:51.793699 1436700 command_runner.go:130] > # metrics_collectors = [
	I0131 02:42:51.793711 1436700 command_runner.go:130] > # 	"operations",
	I0131 02:42:51.793725 1436700 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0131 02:42:51.793733 1436700 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0131 02:42:51.793744 1436700 command_runner.go:130] > # 	"operations_errors",
	I0131 02:42:51.793753 1436700 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0131 02:42:51.793763 1436700 command_runner.go:130] > # 	"image_pulls_by_name",
	I0131 02:42:51.793772 1436700 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0131 02:42:51.793783 1436700 command_runner.go:130] > # 	"image_pulls_failures",
	I0131 02:42:51.793792 1436700 command_runner.go:130] > # 	"image_pulls_successes",
	I0131 02:42:51.793802 1436700 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0131 02:42:51.793811 1436700 command_runner.go:130] > # 	"image_layer_reuse",
	I0131 02:42:51.793821 1436700 command_runner.go:130] > # 	"containers_oom_total",
	I0131 02:42:51.793832 1436700 command_runner.go:130] > # 	"containers_oom",
	I0131 02:42:51.793840 1436700 command_runner.go:130] > # 	"processes_defunct",
	I0131 02:42:51.793850 1436700 command_runner.go:130] > # 	"operations_total",
	I0131 02:42:51.793861 1436700 command_runner.go:130] > # 	"operations_latency_seconds",
	I0131 02:42:51.793873 1436700 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0131 02:42:51.793884 1436700 command_runner.go:130] > # 	"operations_errors_total",
	I0131 02:42:51.793892 1436700 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0131 02:42:51.793904 1436700 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0131 02:42:51.793916 1436700 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0131 02:42:51.793927 1436700 command_runner.go:130] > # 	"image_pulls_success_total",
	I0131 02:42:51.793938 1436700 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0131 02:42:51.793949 1436700 command_runner.go:130] > # 	"containers_oom_count_total",
	I0131 02:42:51.793958 1436700 command_runner.go:130] > # ]
	I0131 02:42:51.793970 1436700 command_runner.go:130] > # The port on which the metrics server will listen.
	I0131 02:42:51.793979 1436700 command_runner.go:130] > # metrics_port = 9090
	I0131 02:42:51.793989 1436700 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0131 02:42:51.793999 1436700 command_runner.go:130] > # metrics_socket = ""
	I0131 02:42:51.794012 1436700 command_runner.go:130] > # The certificate for the secure metrics server.
	I0131 02:42:51.794026 1436700 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0131 02:42:51.794040 1436700 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0131 02:42:51.794051 1436700 command_runner.go:130] > # certificate on any modification event.
	I0131 02:42:51.794061 1436700 command_runner.go:130] > # metrics_cert = ""
	I0131 02:42:51.794071 1436700 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0131 02:42:51.794083 1436700 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0131 02:42:51.794093 1436700 command_runner.go:130] > # metrics_key = ""
	I0131 02:42:51.794106 1436700 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0131 02:42:51.794115 1436700 command_runner.go:130] > [crio.tracing]
	I0131 02:42:51.794129 1436700 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0131 02:42:51.794139 1436700 command_runner.go:130] > # enable_tracing = false
	I0131 02:42:51.794149 1436700 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0131 02:42:51.794160 1436700 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0131 02:42:51.794173 1436700 command_runner.go:130] > # Number of samples to collect per million spans.
	I0131 02:42:51.794184 1436700 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0131 02:42:51.794198 1436700 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0131 02:42:51.794207 1436700 command_runner.go:130] > [crio.stats]
	I0131 02:42:51.794219 1436700 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0131 02:42:51.794231 1436700 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0131 02:42:51.794240 1436700 command_runner.go:130] > # stats_collection_period = 0
	I0131 02:42:51.794282 1436700 command_runner.go:130] ! time="2024-01-31 02:42:51.775245469Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0131 02:42:51.794304 1436700 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
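The dump above is the effective CRI-O configuration that minikube inspects before generating its node material. A small sketch of pulling a single setting such as cgroup_manager out of that TOML stream, assuming `crio config` can be run on the same host; readCrioSetting is a hypothetical helper, not minikube's real parser:

// crioconfig.go: sketch of reading one key from `crio config` output.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func readCrioSetting(key string) (string, error) {
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		return "", fmt.Errorf("crio config: %w", err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Skip comments; match lines like: cgroup_manager = "cgroupfs"
		if strings.HasPrefix(line, key+" ") || strings.HasPrefix(line, key+"=") {
			parts := strings.SplitN(line, "=", 2)
			if len(parts) == 2 {
				return strings.Trim(strings.TrimSpace(parts[1]), `"`), nil
			}
		}
	}
	return "", fmt.Errorf("%s not found in crio config", key)
}

func main() {
	v, err := readCrioSetting("cgroup_manager")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cgroup_manager =", v) // "cgroupfs" per the dump above
}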
	I0131 02:42:51.794382 1436700 cni.go:84] Creating CNI manager for ""
	I0131 02:42:51.794394 1436700 cni.go:136] 3 nodes found, recommending kindnet
	I0131 02:42:51.794408 1436700 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 02:42:51.794441 1436700 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-263108 NodeName:multinode-263108-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 02:42:51.794602 1436700 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-263108-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
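The InitConfiguration/ClusterConfiguration/KubeletConfiguration block above is rendered per node from the values listed at kubeadm.go:176 (advertise address, node name, node IP, and so on). A rough sketch of that kind of templating with Go's text/template; the template text and params struct here are illustrative and trimmed to a few fields, not minikube's actual template:

// kubeadmcfg.go: sketch of rendering a kubeadm InitConfiguration from a template.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the log above for multinode-263108-m03.
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.39.84",
		APIServerPort:    8443,
		NodeName:         "multinode-263108-m03",
		NodeIP:           "192.168.39.84",
	})
}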
	I0131 02:42:51.794671 1436700 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-263108-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-263108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 02:42:51.794745 1436700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 02:42:51.803238 1436700 command_runner.go:130] > kubeadm
	I0131 02:42:51.803256 1436700 command_runner.go:130] > kubectl
	I0131 02:42:51.803263 1436700 command_runner.go:130] > kubelet
	I0131 02:42:51.803287 1436700 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 02:42:51.803356 1436700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0131 02:42:51.811251 1436700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0131 02:42:51.826994 1436700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
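The two scp steps above place the kubelet unit and its 10-kubeadm.conf drop-in on the node. A local sketch of writing such a drop-in, assuming root on a systemd host; the ExecStart line simply mirrors the unit text printed earlier in the log, and the exact split of content between kubelet.service and the drop-in is not visible here, so treat the layout as an assumption:

// kubeletdropin.go: sketch of writing the kubelet systemd drop-in.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-263108-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
`

func main() {
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), []byte(dropIn), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The host would then typically run: systemctl daemon-reload && systemctl restart kubelet
}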
	I0131 02:42:51.844035 1436700 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I0131 02:42:51.847363 1436700 command_runner.go:130] > 192.168.39.109	control-plane.minikube.internal
	I0131 02:42:51.847590 1436700 host.go:66] Checking if "multinode-263108" exists ...
	I0131 02:42:51.847889 1436700 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:42:51.847925 1436700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:42:51.847960 1436700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:42:51.863269 1436700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0131 02:42:51.863751 1436700 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:42:51.864229 1436700 main.go:141] libmachine: Using API Version  1
	I0131 02:42:51.864248 1436700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:42:51.864605 1436700 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:42:51.864799 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:42:51.864936 1436700 start.go:304] JoinCluster: &{Name:multinode-263108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-263108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.60 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:42:51.865076 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0131 02:42:51.865136 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:42:51.868003 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:42:51.868411 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:42:51.868439 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:42:51.868657 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:42:51.868888 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:42:51.869069 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:42:51.869252 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa Username:docker}
	I0131 02:42:52.052155 1436700 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token g0hc5c.v5trwtnkb1wu49g1 --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
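The join command above is generated on the primary control-plane node. A minimal sketch of that step, matching the runner invocation at 02:42:51.865076 (the binary path is the one staged by minikube in this run; <token> and <hash> stand in for the values printed above):

	# Print a join command with a non-expiring bootstrap token (as executed above).
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	  kubeadm token create --print-join-command --ttl=0
	# -> kubeadm join control-plane.minikube.internal:8443 --token <token> \
	#      --discovery-token-ca-cert-hash sha256:<hash>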
	I0131 02:42:52.056121 1436700 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0131 02:42:52.056186 1436700 host.go:66] Checking if "multinode-263108" exists ...
	I0131 02:42:52.056646 1436700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:42:52.056719 1436700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:42:52.073097 1436700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
	I0131 02:42:52.073579 1436700 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:42:52.074050 1436700 main.go:141] libmachine: Using API Version  1
	I0131 02:42:52.074075 1436700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:42:52.074447 1436700 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:42:52.074679 1436700 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:42:52.074960 1436700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-263108-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0131 02:42:52.074995 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:42:52.078019 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:42:52.078445 1436700 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:42:52.078499 1436700 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:42:52.078612 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:42:52.078804 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:42:52.079027 1436700 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:42:52.079217 1436700 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa Username:docker}
	I0131 02:42:52.276969 1436700 command_runner.go:130] > node/multinode-263108-m03 cordoned
	I0131 02:42:55.316825 1436700 command_runner.go:130] > pod "busybox-5b5d89c9d6-ft7n7" has DeletionTimestamp older than 1 seconds, skipping
	I0131 02:42:55.316861 1436700 command_runner.go:130] > node/multinode-263108-m03 drained
	I0131 02:42:55.318276 1436700 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0131 02:42:55.318303 1436700 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-88m7n, kube-system/kube-proxy-mpxjh
	I0131 02:42:55.318328 1436700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-263108-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.243340119s)
	I0131 02:42:55.318343 1436700 node.go:108] successfully drained node "m03"
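The drain that just completed corresponds to this command, taken verbatim from the runner invocation above; kubectl's warning about --delete-local-data is expected, since that flag is deprecated in favour of --delete-emptydir-data:

	# Cordon and drain the stale worker before removing it from the cluster.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-263108-m03 \
	  --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
	  --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data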
	I0131 02:42:55.318763 1436700 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:42:55.319006 1436700 kapi.go:59] client config for multinode-263108: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:42:55.319323 1436700 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0131 02:42:55.319418 1436700 round_trippers.go:463] DELETE https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:42:55.319432 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:55.319443 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:55.319449 1436700 round_trippers.go:473]     Content-Type: application/json
	I0131 02:42:55.319460 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:55.330887 1436700 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0131 02:42:55.330909 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:55.330915 1436700 round_trippers.go:580]     Audit-Id: b5f0855c-d016-4368-9af4-c2707baed5e2
	I0131 02:42:55.330921 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:55.330926 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:55.330931 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:55.330939 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:55.330947 1436700 round_trippers.go:580]     Content-Length: 171
	I0131 02:42:55.330954 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:55 GMT
	I0131 02:42:55.330981 1436700 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-263108-m03","kind":"nodes","uid":"5d8d8dfa-72be-4459-b7bc-217aef0cc608"}}
	I0131 02:42:55.331016 1436700 node.go:124] successfully deleted node "m03"
	I0131 02:42:55.331030 1436700 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
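The node object itself is removed with a direct DELETE against the API server (the request/response pair above). A rough kubectl equivalent, not the code path minikube actually uses, would be:

	# Delete the stale Node object so the worker can rejoin with a fresh identity.
	kubectl --kubeconfig /var/lib/minikube/kubeconfig delete node multinode-263108-m03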
	I0131 02:42:55.331058 1436700 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0131 02:42:55.331082 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g0hc5c.v5trwtnkb1wu49g1 --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-263108-m03"
	I0131 02:42:55.386524 1436700 command_runner.go:130] > [preflight] Running pre-flight checks
	I0131 02:42:55.537887 1436700 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0131 02:42:55.537939 1436700 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0131 02:42:55.597464 1436700 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 02:42:55.597771 1436700 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 02:42:55.597788 1436700 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0131 02:42:55.728634 1436700 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0131 02:42:56.252108 1436700 command_runner.go:130] > This node has joined the cluster:
	I0131 02:42:56.252137 1436700 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0131 02:42:56.252144 1436700 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0131 02:42:56.252151 1436700 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0131 02:42:56.255613 1436700 command_runner.go:130] ! W0131 02:42:55.378394    2408 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0131 02:42:56.255634 1436700 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0131 02:42:56.255642 1436700 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0131 02:42:56.255651 1436700 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0131 02:42:56.255675 1436700 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
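Taken together, the rejoin boils down to running the printed join command on the worker and restarting its kubelet, which is what the two runner invocations above do (<token> and <hash> are the values from the token-create step):

	# On multinode-263108-m03: join using the token/hash printed earlier, then start the kubelet.
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	  kubeadm join control-plane.minikube.internal:8443 \
	  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	  --ignore-preflight-errors=all \
	  --cri-socket unix:///var/run/crio/crio.sock \
	  --node-name=multinode-263108-m03
	sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet
	# Note: the run above passed the socket without the unix:// scheme, which is what
	# produced the "Usage of CRI endpoints without URL scheme is deprecated" warning.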
	I0131 02:42:56.514082 1436700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=multinode-263108 minikube.k8s.io/updated_at=2024_01_31T02_42_56_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 02:42:56.609002 1436700 command_runner.go:130] > node/multinode-263108-m02 labeled
	I0131 02:42:56.624341 1436700 command_runner.go:130] > node/multinode-263108-m03 labeled
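The label step stamps minikube metadata onto every non-primary node, which is why both m02 and m03 report as labeled. An abbreviated sketch of the command run above (the commit and updated_at labels are elided here for brevity):

	# Label all nodes that are not the primary control plane.
	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  label nodes -l 'minikube.k8s.io/primary!=true' --overwrite \
	  minikube.k8s.io/version=v1.32.0 \
	  minikube.k8s.io/name=multinode-263108 \
	  minikube.k8s.io/primary=false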
	I0131 02:42:56.625766 1436700 start.go:306] JoinCluster complete in 4.760827501s
	I0131 02:42:56.625796 1436700 cni.go:84] Creating CNI manager for ""
	I0131 02:42:56.625804 1436700 cni.go:136] 3 nodes found, recommending kindnet
	I0131 02:42:56.625867 1436700 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0131 02:42:56.631812 1436700 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0131 02:42:56.631844 1436700 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0131 02:42:56.631859 1436700 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0131 02:42:56.631869 1436700 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0131 02:42:56.631877 1436700 command_runner.go:130] > Access: 2024-01-31 02:38:46.128809878 +0000
	I0131 02:42:56.631886 1436700 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0131 02:42:56.631896 1436700 command_runner.go:130] > Change: 2024-01-31 02:38:44.179809878 +0000
	I0131 02:42:56.631906 1436700 command_runner.go:130] >  Birth: -
	I0131 02:42:56.632052 1436700 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0131 02:42:56.632074 1436700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0131 02:42:56.650371 1436700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0131 02:42:56.981151 1436700 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0131 02:42:56.984907 1436700 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0131 02:42:56.988147 1436700 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0131 02:42:57.005123 1436700 command_runner.go:130] > daemonset.apps/kindnet configured
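The CNI step writes the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the staged kubectl, which is why the resources above report as unchanged or configured rather than created:

	# Apply the kindnet manifest that was copied to the node (as run above).
	sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -f /var/tmp/minikube/cni.yaml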
	I0131 02:42:57.008197 1436700 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:42:57.008456 1436700 kapi.go:59] client config for multinode-263108: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:42:57.008804 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0131 02:42:57.008821 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.008829 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.008835 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.011104 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:42:57.011129 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.011137 1436700 round_trippers.go:580]     Audit-Id: 7af5516b-7f7b-4087-ab53-f3dd444b427d
	I0131 02:42:57.011143 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.011148 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.011156 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.011162 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.011173 1436700 round_trippers.go:580]     Content-Length: 291
	I0131 02:42:57.011178 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.011205 1436700 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2554d8bc-c0ad-485d-a9be-18a695e4434b","resourceVersion":"933","creationTimestamp":"2024-01-31T02:28:17Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0131 02:42:57.011306 1436700 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-263108" context rescaled to 1 replicas
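The rescale above is done through the deployment's scale subresource; a kubectl equivalent (a sketch only, not minikube's client code) would be:

	# Keep a single CoreDNS replica for the multinode profile.
	kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
	  scale deployment coredns --replicas=1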
	I0131 02:42:57.011333 1436700 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0131 02:42:57.013485 1436700 out.go:177] * Verifying Kubernetes components...
	I0131 02:42:57.015083 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:42:57.028451 1436700 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:42:57.028678 1436700 kapi.go:59] client config for multinode-263108: &rest.Config{Host:"https://192.168.39.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/multinode-263108/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:42:57.028904 1436700 node_ready.go:35] waiting up to 6m0s for node "multinode-263108-m03" to be "Ready" ...
	I0131 02:42:57.028974 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:42:57.028983 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.028990 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.028996 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.031433 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:42:57.031454 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.031466 1436700 round_trippers.go:580]     Audit-Id: a02f4a92-04e0-4d00-a86e-1550eb6f148d
	I0131 02:42:57.031475 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.031484 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.031489 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.031494 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.031499 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.031868 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m03","uid":"6ef12ead-7c8a-46ba-a545-cbe16f681ad3","resourceVersion":"1279","creationTimestamp":"2024-01-31T02:42:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_42_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0131 02:42:57.032300 1436700 node_ready.go:49] node "multinode-263108-m03" has status "Ready":"True"
	I0131 02:42:57.032328 1436700 node_ready.go:38] duration metric: took 3.405369ms waiting for node "multinode-263108-m03" to be "Ready" ...
	I0131 02:42:57.032340 1436700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
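The readiness checks that follow poll the API directly via round_trippers; roughly the same checks expressed with kubectl (a sketch only, under the 6m timeout stated above) would look like:

	# Node readiness, then the system-critical pods listed above.
	kubectl --kubeconfig /var/lib/minikube/kubeconfig wait --for=condition=Ready \
	  node/multinode-263108-m03 --timeout=6m
	kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system wait --for=condition=Ready \
	  pod -l k8s-app=kube-dns --timeout=6m
	# ...and likewise for the etcd, kube-apiserver, kube-controller-manager,
	# kube-proxy and kube-scheduler components.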
	I0131 02:42:57.032415 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods
	I0131 02:42:57.032426 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.032434 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.032444 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.036570 1436700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0131 02:42:57.036595 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.036605 1436700 round_trippers.go:580]     Audit-Id: 416ffe12-84b5-4d4c-b4e4-3adf70a6f948
	I0131 02:42:57.036613 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.036621 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.036628 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.036636 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.036644 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.038080 1436700 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1283"},"items":[{"metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"918","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82071 chars]
	I0131 02:42:57.040497 1436700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:57.040578 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-skqw4
	I0131 02:42:57.040593 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.040603 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.040613 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.042718 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:42:57.042739 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.042750 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.042758 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.042764 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.042771 1436700 round_trippers.go:580]     Audit-Id: d7f5257c-696e-448a-aab8-f1f3645a1392
	I0131 02:42:57.042776 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.042782 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.042887 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-skqw4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"713e1df7-54be-4322-986d-b6d7db88c1c7","resourceVersion":"918","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b1efe95-d0ac-494e-9e05-ee7a1a24e8d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0131 02:42:57.043416 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:42:57.043432 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.043443 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.043454 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.045375 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:42:57.045392 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.045401 1436700 round_trippers.go:580]     Audit-Id: 5bbf0e73-145b-4bbb-bf02-296fb5709aee
	I0131 02:42:57.045408 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.045416 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.045424 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.045433 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.045442 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.045664 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:42:57.046004 1436700 pod_ready.go:92] pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace has status "Ready":"True"
	I0131 02:42:57.046021 1436700 pod_ready.go:81] duration metric: took 5.50229ms waiting for pod "coredns-5dd5756b68-skqw4" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:57.046030 1436700 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:57.046088 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-263108
	I0131 02:42:57.046096 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.046103 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.046109 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.048257 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:42:57.048272 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.048278 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.048286 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.048291 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.048300 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.048308 1436700 round_trippers.go:580]     Audit-Id: 08fffcb5-4f46-4ec1-9f49-774fdb2a7529
	I0131 02:42:57.048317 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.048599 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-263108","namespace":"kube-system","uid":"cf8c4ba5-fce9-4570-a204-0b713281fc21","resourceVersion":"940","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.109:2379","kubernetes.io/config.hash":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.mirror":"65cf9b3c171af227e879742789ab79ee","kubernetes.io/config.seen":"2024-01-31T02:28:18.078200982Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0131 02:42:57.049007 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:42:57.049022 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.049030 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.049036 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.051041 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:42:57.051056 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.051063 1436700 round_trippers.go:580]     Audit-Id: f2c39233-4ce6-4b53-8377-946b6df1caad
	I0131 02:42:57.051068 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.051073 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.051081 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.051089 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.051097 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.051385 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:42:57.051724 1436700 pod_ready.go:92] pod "etcd-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:42:57.051745 1436700 pod_ready.go:81] duration metric: took 5.706791ms waiting for pod "etcd-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:57.051764 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:57.051826 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-263108
	I0131 02:42:57.051833 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.051841 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.051846 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.053666 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:42:57.053681 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.053687 1436700 round_trippers.go:580]     Audit-Id: fe46d7e0-b55c-4a5f-95aa-3a10802e9394
	I0131 02:42:57.053692 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.053698 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.053704 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.053713 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.053721 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.053948 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-263108","namespace":"kube-system","uid":"0c527200-696b-4681-af91-226016437113","resourceVersion":"910","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.109:8443","kubernetes.io/config.hash":"d670ff05d0032fcc9ae24f8fc09df250","kubernetes.io/config.mirror":"d670ff05d0032fcc9ae24f8fc09df250","kubernetes.io/config.seen":"2024-01-31T02:28:18.078204875Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0131 02:42:57.054360 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:42:57.054374 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.054382 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.054387 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.056078 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:42:57.056089 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.056095 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.056100 1436700 round_trippers.go:580]     Audit-Id: 65c02643-09eb-4601-ac7f-3d017cf93958
	I0131 02:42:57.056105 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.056110 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.056116 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.056124 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.056449 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:42:57.056728 1436700 pod_ready.go:92] pod "kube-apiserver-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:42:57.056742 1436700 pod_ready.go:81] duration metric: took 4.96916ms waiting for pod "kube-apiserver-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:57.056751 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:57.056808 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-263108
	I0131 02:42:57.056816 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.056823 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.056829 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.058779 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:42:57.058800 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.058810 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.058822 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.058830 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.058840 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.058852 1436700 round_trippers.go:580]     Audit-Id: 424a6957-874c-43e1-b16f-70de71110a78
	I0131 02:42:57.058863 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.059179 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-263108","namespace":"kube-system","uid":"056ea293-6261-4e6c-9b3f-9fdc7d0727a2","resourceVersion":"914","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19e16e470f3c55d41e223486b2026f1d","kubernetes.io/config.mirror":"19e16e470f3c55d41e223486b2026f1d","kubernetes.io/config.seen":"2024-01-31T02:28:18.078205997Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0131 02:42:57.059567 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:42:57.059580 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.059588 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.059596 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.061223 1436700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0131 02:42:57.061242 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.061252 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.061260 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.061272 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.061280 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.061291 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.061301 1436700 round_trippers.go:580]     Audit-Id: ec7a29f5-6471-4ccf-91bd-952a4f411d28
	I0131 02:42:57.061651 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:42:57.061978 1436700 pod_ready.go:92] pod "kube-controller-manager-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:42:57.061995 1436700 pod_ready.go:81] duration metric: took 5.238079ms waiting for pod "kube-controller-manager-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:57.062003 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mpxjh" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:57.229372 1436700 request.go:629] Waited for 167.300818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:42:57.229460 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:42:57.229471 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.229483 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.229495 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.232700 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:42:57.232728 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.232739 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.232755 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.232764 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.232772 1436700 round_trippers.go:580]     Audit-Id: cb8b68b4-187a-4a34-aa6e-b595f6f2f5fa
	I0131 02:42:57.232780 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.232788 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.232906 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mpxjh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3a11b226-7a8e-4b25-a409-acc439d4bdfb","resourceVersion":"1224","creationTimestamp":"2024-01-31T02:30:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0131 02:42:57.429777 1436700 request.go:629] Waited for 196.364458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:42:57.429852 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:42:57.429859 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.429869 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.429878 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.432629 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:42:57.432655 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.432665 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.432674 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.432682 1436700 round_trippers.go:580]     Audit-Id: 8a95c0af-871f-4323-b794-926b23b8eabe
	I0131 02:42:57.432693 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.432703 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.432711 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.432879 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m03","uid":"6ef12ead-7c8a-46ba-a545-cbe16f681ad3","resourceVersion":"1279","creationTimestamp":"2024-01-31T02:42:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_42_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0131 02:42:57.629451 1436700 request.go:629] Waited for 66.272737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:42:57.629532 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:42:57.629542 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.629556 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.629566 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.632491 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:42:57.632522 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.632533 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.632542 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.632549 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.632557 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.632566 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.632575 1436700 round_trippers.go:580]     Audit-Id: 288299d9-f243-4dc2-8c7e-f3add2a526e7
	I0131 02:42:57.632730 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mpxjh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3a11b226-7a8e-4b25-a409-acc439d4bdfb","resourceVersion":"1224","creationTimestamp":"2024-01-31T02:30:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0131 02:42:57.829591 1436700 request.go:629] Waited for 196.374103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:42:57.829661 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:42:57.829666 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:57.829674 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:57.829684 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:57.832855 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:42:57.832876 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:57.832884 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:57.832890 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:57 GMT
	I0131 02:42:57.832895 1436700 round_trippers.go:580]     Audit-Id: 905b12e6-a8c4-43ef-9cca-02b5284559f5
	I0131 02:42:57.832900 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:57.832904 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:57.832909 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:57.833085 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m03","uid":"6ef12ead-7c8a-46ba-a545-cbe16f681ad3","resourceVersion":"1279","creationTimestamp":"2024-01-31T02:42:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_42_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0131 02:42:58.062330 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mpxjh
	I0131 02:42:58.062355 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:58.062364 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:58.062370 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:58.065510 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:42:58.065542 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:58.065554 1436700 round_trippers.go:580]     Audit-Id: cbe7c1f3-2047-452e-b079-b4aa84a4b4e9
	I0131 02:42:58.065563 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:58.065571 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:58.065580 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:58.065588 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:58.065596 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:58 GMT
	I0131 02:42:58.065737 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mpxjh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3a11b226-7a8e-4b25-a409-acc439d4bdfb","resourceVersion":"1294","creationTimestamp":"2024-01-31T02:30:42Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:30:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0131 02:42:58.229721 1436700 request.go:629] Waited for 163.420332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:42:58.229801 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m03
	I0131 02:42:58.229806 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:58.229814 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:58.229821 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:58.233036 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:42:58.233062 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:58.233073 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:58.233082 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:58.233091 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:58 GMT
	I0131 02:42:58.233166 1436700 round_trippers.go:580]     Audit-Id: 61a4f3aa-b8ba-4b3b-af07-e1c67aa6128e
	I0131 02:42:58.233202 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:58.233213 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:58.233317 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m03","uid":"6ef12ead-7c8a-46ba-a545-cbe16f681ad3","resourceVersion":"1279","creationTimestamp":"2024-01-31T02:42:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_42_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:42:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0131 02:42:58.233614 1436700 pod_ready.go:92] pod "kube-proxy-mpxjh" in "kube-system" namespace has status "Ready":"True"
	I0131 02:42:58.233632 1436700 pod_ready.go:81] duration metric: took 1.171619882s waiting for pod "kube-proxy-mpxjh" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:58.233642 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x5jb7" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:58.429105 1436700 request.go:629] Waited for 195.341363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x5jb7
	I0131 02:42:58.429176 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x5jb7
	I0131 02:42:58.429181 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:58.429190 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:58.429196 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:58.432440 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:42:58.432460 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:58.432467 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:58.432472 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:58.432478 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:58.432483 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:58 GMT
	I0131 02:42:58.432493 1436700 round_trippers.go:580]     Audit-Id: 9886e552-e3ec-41a1-a63e-1b18cbafc7fa
	I0131 02:42:58.432500 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:58.432670 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x5jb7","generateName":"kube-proxy-","namespace":"kube-system","uid":"4dc3dae9-7781-4832-88ba-08a17ecfe557","resourceVersion":"1109","creationTimestamp":"2024-01-31T02:29:54Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:29:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0131 02:42:58.629533 1436700 request.go:629] Waited for 196.366689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:42:58.629612 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108-m02
	I0131 02:42:58.629617 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:58.629625 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:58.629634 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:58.634230 1436700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0131 02:42:58.634251 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:58.634258 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:58.634264 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:58 GMT
	I0131 02:42:58.634269 1436700 round_trippers.go:580]     Audit-Id: 2a5577b5-7a96-4079-81e3-062ac141ecdf
	I0131 02:42:58.634280 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:58.634290 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:58.634298 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:58.634507 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108-m02","uid":"a3f4c8e0-3882-4367-a171-122b15d899d3","resourceVersion":"1278","creationTimestamp":"2024-01-31T02:41:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_31T02_42_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:41:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 4222 chars]
	I0131 02:42:58.634785 1436700 pod_ready.go:92] pod "kube-proxy-x5jb7" in "kube-system" namespace has status "Ready":"True"
	I0131 02:42:58.634801 1436700 pod_ready.go:81] duration metric: took 401.153894ms waiting for pod "kube-proxy-x5jb7" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:58.634810 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x85lz" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:58.830009 1436700 request.go:629] Waited for 195.131421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x85lz
	I0131 02:42:58.830091 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x85lz
	I0131 02:42:58.830104 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:58.830131 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:58.830142 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:58.833014 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:42:58.833034 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:58.833041 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:58 GMT
	I0131 02:42:58.833053 1436700 round_trippers.go:580]     Audit-Id: 25741c5b-ee06-45a5-8cf9-3082b4f326e4
	I0131 02:42:58.833061 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:58.833071 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:58.833078 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:58.833086 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:58.833301 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x85lz","generateName":"kube-proxy-","namespace":"kube-system","uid":"36e014b9-154e-43f4-b694-7f05bd31baef","resourceVersion":"837","creationTimestamp":"2024-01-31T02:28:30Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4a181bec-d2f7-4c33-b3ee-388920521d1c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a181bec-d2f7-4c33-b3ee-388920521d1c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0131 02:42:59.029053 1436700 request.go:629] Waited for 195.271922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:42:59.029126 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:42:59.029131 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:59.029142 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:59.029148 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:59.032732 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:42:59.032756 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:59.032768 1436700 round_trippers.go:580]     Audit-Id: 68a9abfc-e26c-4515-9405-8aff3ca85323
	I0131 02:42:59.032778 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:59.032787 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:59.032799 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:59.032804 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:59.032810 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:59 GMT
	I0131 02:42:59.033327 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:42:59.033791 1436700 pod_ready.go:92] pod "kube-proxy-x85lz" in "kube-system" namespace has status "Ready":"True"
	I0131 02:42:59.033819 1436700 pod_ready.go:81] duration metric: took 399.002035ms waiting for pod "kube-proxy-x85lz" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:59.033841 1436700 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:59.229597 1436700 request.go:629] Waited for 195.667298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-263108
	I0131 02:42:59.229709 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-263108
	I0131 02:42:59.229721 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:59.229732 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:59.229746 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:59.232927 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:42:59.232948 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:59.232956 1436700 round_trippers.go:580]     Audit-Id: ede4e886-1613-4f24-9a77-f2c61da60345
	I0131 02:42:59.232961 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:59.232967 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:59.232972 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:59.232977 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:59.232982 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:59 GMT
	I0131 02:42:59.233215 1436700 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-263108","namespace":"kube-system","uid":"7cc8534f-0f2b-457e-9942-e49d0f507875","resourceVersion":"941","creationTimestamp":"2024-01-31T02:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7320cc932f9ec0e3160c3b0ecdf22c62","kubernetes.io/config.mirror":"7320cc932f9ec0e3160c3b0ecdf22c62","kubernetes.io/config.seen":"2024-01-31T02:28:18.078207038Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-31T02:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0131 02:42:59.429974 1436700 request.go:629] Waited for 196.385489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:42:59.430086 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes/multinode-263108
	I0131 02:42:59.430098 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:59.430111 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:59.430124 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:59.433094 1436700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0131 02:42:59.433117 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:59.433127 1436700 round_trippers.go:580]     Audit-Id: be4b6db8-47d7-48b2-8dc0-6b87c8373886
	I0131 02:42:59.433133 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:59.433138 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:59.433143 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:59.433148 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:59.433153 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:59 GMT
	I0131 02:42:59.433525 1436700 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-31T02:28:14Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0131 02:42:59.433871 1436700 pod_ready.go:92] pod "kube-scheduler-multinode-263108" in "kube-system" namespace has status "Ready":"True"
	I0131 02:42:59.433891 1436700 pod_ready.go:81] duration metric: took 400.042659ms waiting for pod "kube-scheduler-multinode-263108" in "kube-system" namespace to be "Ready" ...
	I0131 02:42:59.433902 1436700 pod_ready.go:38] duration metric: took 2.401551242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:42:59.433916 1436700 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 02:42:59.433964 1436700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:42:59.446980 1436700 system_svc.go:56] duration metric: took 13.057567ms WaitForService to wait for kubelet.
	I0131 02:42:59.447003 1436700 kubeadm.go:581] duration metric: took 2.435649441s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 02:42:59.447021 1436700 node_conditions.go:102] verifying NodePressure condition ...
	I0131 02:42:59.629478 1436700 request.go:629] Waited for 182.371833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.109:8443/api/v1/nodes
	I0131 02:42:59.629566 1436700 round_trippers.go:463] GET https://192.168.39.109:8443/api/v1/nodes
	I0131 02:42:59.629571 1436700 round_trippers.go:469] Request Headers:
	I0131 02:42:59.629579 1436700 round_trippers.go:473]     Accept: application/json, */*
	I0131 02:42:59.629589 1436700 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0131 02:42:59.632726 1436700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0131 02:42:59.632748 1436700 round_trippers.go:577] Response Headers:
	I0131 02:42:59.632755 1436700 round_trippers.go:580]     Audit-Id: 6ee43e92-62fe-481b-aef7-46d028097bfb
	I0131 02:42:59.632761 1436700 round_trippers.go:580]     Cache-Control: no-cache, private
	I0131 02:42:59.632766 1436700 round_trippers.go:580]     Content-Type: application/json
	I0131 02:42:59.632771 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 86d76fa4-0400-4869-9e1d-f7ee2bd19a7f
	I0131 02:42:59.632776 1436700 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb44b2cf-cd74-45fa-81c8-32ad3c081e69
	I0131 02:42:59.632782 1436700 round_trippers.go:580]     Date: Wed, 31 Jan 2024 02:42:59 GMT
	I0131 02:42:59.633353 1436700 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1300"},"items":[{"metadata":{"name":"multinode-263108","uid":"4f9b19d6-7f0c-4abc-9d55-6c9bd1eb4af0","resourceVersion":"954","creationTimestamp":"2024-01-31T02:28:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-263108","kubernetes.io/os":"linux","minikube.k8s.io/commit":"de6311e496aefb62bd53fcfd0fb6b150999d9424","minikube.k8s.io/name":"multinode-263108","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_31T02_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16695 chars]
	I0131 02:42:59.634144 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:42:59.634169 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:42:59.634183 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:42:59.634188 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:42:59.634194 1436700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:42:59.634199 1436700 node_conditions.go:123] node cpu capacity is 2
	I0131 02:42:59.634204 1436700 node_conditions.go:105] duration metric: took 187.178703ms to run NodePressure ...
	I0131 02:42:59.634217 1436700 start.go:228] waiting for startup goroutines ...
	I0131 02:42:59.634253 1436700 start.go:242] writing updated cluster config ...
	I0131 02:42:59.634647 1436700 ssh_runner.go:195] Run: rm -f paused
	I0131 02:42:59.687011 1436700 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 02:42:59.689941 1436700 out.go:177] * Done! kubectl is now configured to use "multinode-263108" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 02:38:44 UTC, ends at Wed 2024-01-31 02:43:00 UTC. --
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.822654011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706668980822634922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a4fba856-cf10-40e0-a084-a495ba7574be name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.824161729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f5af2fc6-7318-43e9-ada7-c174bb2619ca name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.824273068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f5af2fc6-7318-43e9-ada7-c174bb2619ca name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.824590252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e3e3d2cb5a7a301f023a69a8004e6674a788c3b99a4bbe3d4732fe87cb304ad,PodSandboxId:9c980583871bf8e3f72388f30383d93dc710168813eb0df1edd1d3a902df3e98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706668770574490584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba2b6b-2a00-4af9-bdb8-67d110b3eb19,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6c7235,io.kubernetes.container.restartCount: 3,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3601bb1252da361bbcfcc520d728a4600c9a7a6709cdb5f0bfbf4e05370abb4,PodSandboxId:1c36a40c0d92c2304f21837c2a64a0c31eb98c22e3c1f70c5c1403b428e8491e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706668768502346788,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dlpzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ec0b181-2e2d-4e23-9261-d2be8a85e401,},Annotations:map[string]string{io.kubernetes.container.hash: 16e95c81,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6a911b4aeb713ca91d415535a1d1c9ec77496906f110a9201e42fb7f672d5d,PodSandboxId:7b66b8f2892d6b791137abb6e4216c979ee50da9f0d46a094e4e9d42e687aa1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706668766051939574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-skqw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 713e1df7-54be-4322-986d-b6d7db88c1c7,},Annotations:map[string]string{io.kubernetes.container.hash: a02aa8c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ffbc733f0a7a0b36c0f8cfd8466a4ce2b3e60acde57e63d8b652b8f8ddee90,PodSandboxId:e5e7ce3781dc872e650edd48223cb8e3bf85cdcf4c2ffab6d7424d9d98fb7285,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706668760736263850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-knvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8e734b81-4d44-4c96-8439-0ef800021bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 519853ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e858bc707d09132b1e518a2c2181736e4c09ce55a58b8aafa6c23ff8692cae,PodSandboxId:9c980583871bf8e3f72388f30383d93dc710168813eb0df1edd1d3a902df3e98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706668758623385185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: eaba2b6b-2a00-4af9-bdb8-67d110b3eb19,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6c7235,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a89f9cac66f16c25935156705417ab09e049b4727fe064703799f2284aea66,PodSandboxId:48e234a4e80a4ecbe53ce69b4f273fc8544e164230b64abf0f2ab2e02cf8c5cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706668758448076767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85lz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36e014b9-154e-43f4-b694-7f05bd31
baef,},Annotations:map[string]string{io.kubernetes.container.hash: 94f0b9f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c5e3ce0fa5af41914cec39488462d499a693a119c605c12522564e4c7a90f1,PodSandboxId:8e305bc4ffc862e38af72e2c04b7ed2fa29413cdbaa29583dbe7c9a1319c9283,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706668752900665725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cf9b3c171af227e879742789ab79ee,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3464868c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c714a783871b7613686f9fe6e0f214d3eb54ca5ab08f30eb0ad7faa7520e70e7,PodSandboxId:ef8f48bf59c3a1ee8a4a5bdc322a7825e4f8442a7897289eec3440ba48a2806d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706668752732380611,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7320cc932f9ec0e3160c3b0ecdf22c62,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a84020a2b1ba9d04c5f50378538d7cedd4abb2e7a18c1aad04b5a80d54dda31,PodSandboxId:7973fb4e50033d0ced741ad2297e7c32bb8af91a36e345404c7b672850d10a85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706668752602066872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e16e470f3c55d41e223486b2026f1d,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f7756d2ff7480c9783154263e413ddf24b0f60ec440d991b80a2e469285ba6,PodSandboxId:e0a8161c4af5beb7fea249b8e78cacb21ac9d64770bc86fc2458f4a9ed244577,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706668752556022446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d670ff05d0032fcc9ae24f8fc09df250,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2b74cab2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f5af2fc6-7318-43e9-ada7-c174bb2619ca name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.878119280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3287abac-6500-4e1d-8185-153581321418 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.878202629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3287abac-6500-4e1d-8185-153581321418 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.879467273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f2bb1817-d32b-4ae8-a2d8-d9e789be35ca name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.880109926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706668980880083768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f2bb1817-d32b-4ae8-a2d8-d9e789be35ca name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.881017391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=afe251d0-6ad8-4d0d-8cc8-e1d2e0d6c5cf name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.881082366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=afe251d0-6ad8-4d0d-8cc8-e1d2e0d6c5cf name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.881336886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e3e3d2cb5a7a301f023a69a8004e6674a788c3b99a4bbe3d4732fe87cb304ad,PodSandboxId:9c980583871bf8e3f72388f30383d93dc710168813eb0df1edd1d3a902df3e98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706668770574490584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba2b6b-2a00-4af9-bdb8-67d110b3eb19,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6c7235,io.kubernetes.container.restartCount: 3,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3601bb1252da361bbcfcc520d728a4600c9a7a6709cdb5f0bfbf4e05370abb4,PodSandboxId:1c36a40c0d92c2304f21837c2a64a0c31eb98c22e3c1f70c5c1403b428e8491e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706668768502346788,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dlpzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ec0b181-2e2d-4e23-9261-d2be8a85e401,},Annotations:map[string]string{io.kubernetes.container.hash: 16e95c81,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6a911b4aeb713ca91d415535a1d1c9ec77496906f110a9201e42fb7f672d5d,PodSandboxId:7b66b8f2892d6b791137abb6e4216c979ee50da9f0d46a094e4e9d42e687aa1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706668766051939574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-skqw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 713e1df7-54be-4322-986d-b6d7db88c1c7,},Annotations:map[string]string{io.kubernetes.container.hash: a02aa8c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ffbc733f0a7a0b36c0f8cfd8466a4ce2b3e60acde57e63d8b652b8f8ddee90,PodSandboxId:e5e7ce3781dc872e650edd48223cb8e3bf85cdcf4c2ffab6d7424d9d98fb7285,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706668760736263850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-knvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8e734b81-4d44-4c96-8439-0ef800021bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 519853ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e858bc707d09132b1e518a2c2181736e4c09ce55a58b8aafa6c23ff8692cae,PodSandboxId:9c980583871bf8e3f72388f30383d93dc710168813eb0df1edd1d3a902df3e98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706668758623385185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: eaba2b6b-2a00-4af9-bdb8-67d110b3eb19,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6c7235,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a89f9cac66f16c25935156705417ab09e049b4727fe064703799f2284aea66,PodSandboxId:48e234a4e80a4ecbe53ce69b4f273fc8544e164230b64abf0f2ab2e02cf8c5cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706668758448076767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85lz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36e014b9-154e-43f4-b694-7f05bd31
baef,},Annotations:map[string]string{io.kubernetes.container.hash: 94f0b9f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c5e3ce0fa5af41914cec39488462d499a693a119c605c12522564e4c7a90f1,PodSandboxId:8e305bc4ffc862e38af72e2c04b7ed2fa29413cdbaa29583dbe7c9a1319c9283,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706668752900665725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cf9b3c171af227e879742789ab79ee,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3464868c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c714a783871b7613686f9fe6e0f214d3eb54ca5ab08f30eb0ad7faa7520e70e7,PodSandboxId:ef8f48bf59c3a1ee8a4a5bdc322a7825e4f8442a7897289eec3440ba48a2806d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706668752732380611,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7320cc932f9ec0e3160c3b0ecdf22c62,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a84020a2b1ba9d04c5f50378538d7cedd4abb2e7a18c1aad04b5a80d54dda31,PodSandboxId:7973fb4e50033d0ced741ad2297e7c32bb8af91a36e345404c7b672850d10a85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706668752602066872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e16e470f3c55d41e223486b2026f1d,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f7756d2ff7480c9783154263e413ddf24b0f60ec440d991b80a2e469285ba6,PodSandboxId:e0a8161c4af5beb7fea249b8e78cacb21ac9d64770bc86fc2458f4a9ed244577,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706668752556022446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d670ff05d0032fcc9ae24f8fc09df250,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2b74cab2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=afe251d0-6ad8-4d0d-8cc8-e1d2e0d6c5cf name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.932632446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2915b1cc-6e77-4545-b8c8-b6e30696b356 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.932689404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2915b1cc-6e77-4545-b8c8-b6e30696b356 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.934755656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6aeecef8-8606-427a-81cc-4053cb3a65de name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.935320872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706668980935208179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6aeecef8-8606-427a-81cc-4053cb3a65de name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.936014848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e1342fdd-9cdf-426c-a30d-790c0ec5a832 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.936196438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e1342fdd-9cdf-426c-a30d-790c0ec5a832 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.936748953Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e3e3d2cb5a7a301f023a69a8004e6674a788c3b99a4bbe3d4732fe87cb304ad,PodSandboxId:9c980583871bf8e3f72388f30383d93dc710168813eb0df1edd1d3a902df3e98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706668770574490584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba2b6b-2a00-4af9-bdb8-67d110b3eb19,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6c7235,io.kubernetes.container.restartCount: 3,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3601bb1252da361bbcfcc520d728a4600c9a7a6709cdb5f0bfbf4e05370abb4,PodSandboxId:1c36a40c0d92c2304f21837c2a64a0c31eb98c22e3c1f70c5c1403b428e8491e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706668768502346788,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dlpzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ec0b181-2e2d-4e23-9261-d2be8a85e401,},Annotations:map[string]string{io.kubernetes.container.hash: 16e95c81,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6a911b4aeb713ca91d415535a1d1c9ec77496906f110a9201e42fb7f672d5d,PodSandboxId:7b66b8f2892d6b791137abb6e4216c979ee50da9f0d46a094e4e9d42e687aa1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706668766051939574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-skqw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 713e1df7-54be-4322-986d-b6d7db88c1c7,},Annotations:map[string]string{io.kubernetes.container.hash: a02aa8c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ffbc733f0a7a0b36c0f8cfd8466a4ce2b3e60acde57e63d8b652b8f8ddee90,PodSandboxId:e5e7ce3781dc872e650edd48223cb8e3bf85cdcf4c2ffab6d7424d9d98fb7285,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706668760736263850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-knvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8e734b81-4d44-4c96-8439-0ef800021bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 519853ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e858bc707d09132b1e518a2c2181736e4c09ce55a58b8aafa6c23ff8692cae,PodSandboxId:9c980583871bf8e3f72388f30383d93dc710168813eb0df1edd1d3a902df3e98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706668758623385185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: eaba2b6b-2a00-4af9-bdb8-67d110b3eb19,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6c7235,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a89f9cac66f16c25935156705417ab09e049b4727fe064703799f2284aea66,PodSandboxId:48e234a4e80a4ecbe53ce69b4f273fc8544e164230b64abf0f2ab2e02cf8c5cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706668758448076767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85lz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36e014b9-154e-43f4-b694-7f05bd31
baef,},Annotations:map[string]string{io.kubernetes.container.hash: 94f0b9f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c5e3ce0fa5af41914cec39488462d499a693a119c605c12522564e4c7a90f1,PodSandboxId:8e305bc4ffc862e38af72e2c04b7ed2fa29413cdbaa29583dbe7c9a1319c9283,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706668752900665725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cf9b3c171af227e879742789ab79ee,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3464868c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c714a783871b7613686f9fe6e0f214d3eb54ca5ab08f30eb0ad7faa7520e70e7,PodSandboxId:ef8f48bf59c3a1ee8a4a5bdc322a7825e4f8442a7897289eec3440ba48a2806d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706668752732380611,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7320cc932f9ec0e3160c3b0ecdf22c62,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a84020a2b1ba9d04c5f50378538d7cedd4abb2e7a18c1aad04b5a80d54dda31,PodSandboxId:7973fb4e50033d0ced741ad2297e7c32bb8af91a36e345404c7b672850d10a85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706668752602066872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e16e470f3c55d41e223486b2026f1d,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f7756d2ff7480c9783154263e413ddf24b0f60ec440d991b80a2e469285ba6,PodSandboxId:e0a8161c4af5beb7fea249b8e78cacb21ac9d64770bc86fc2458f4a9ed244577,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706668752556022446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d670ff05d0032fcc9ae24f8fc09df250,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2b74cab2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e1342fdd-9cdf-426c-a30d-790c0ec5a832 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.976486215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2b309c95-4dcf-400c-93b2-07889efe6379 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.976542500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2b309c95-4dcf-400c-93b2-07889efe6379 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.977926501Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=45e3b7b0-4d68-4ca6-a0db-b2b00fc1be50 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.978296022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706668980978285203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=45e3b7b0-4d68-4ca6-a0db-b2b00fc1be50 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.978924637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0f88ea44-0d7c-4c9f-b8bf-fa0b0934ac69 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.978977429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0f88ea44-0d7c-4c9f-b8bf-fa0b0934ac69 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:43:00 multinode-263108 crio[713]: time="2024-01-31 02:43:00.979193073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e3e3d2cb5a7a301f023a69a8004e6674a788c3b99a4bbe3d4732fe87cb304ad,PodSandboxId:9c980583871bf8e3f72388f30383d93dc710168813eb0df1edd1d3a902df3e98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706668770574490584,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaba2b6b-2a00-4af9-bdb8-67d110b3eb19,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6c7235,io.kubernetes.container.restartCount: 3,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3601bb1252da361bbcfcc520d728a4600c9a7a6709cdb5f0bfbf4e05370abb4,PodSandboxId:1c36a40c0d92c2304f21837c2a64a0c31eb98c22e3c1f70c5c1403b428e8491e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706668768502346788,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dlpzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0ec0b181-2e2d-4e23-9261-d2be8a85e401,},Annotations:map[string]string{io.kubernetes.container.hash: 16e95c81,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6a911b4aeb713ca91d415535a1d1c9ec77496906f110a9201e42fb7f672d5d,PodSandboxId:7b66b8f2892d6b791137abb6e4216c979ee50da9f0d46a094e4e9d42e687aa1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706668766051939574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-skqw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 713e1df7-54be-4322-986d-b6d7db88c1c7,},Annotations:map[string]string{io.kubernetes.container.hash: a02aa8c9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ffbc733f0a7a0b36c0f8cfd8466a4ce2b3e60acde57e63d8b652b8f8ddee90,PodSandboxId:e5e7ce3781dc872e650edd48223cb8e3bf85cdcf4c2ffab6d7424d9d98fb7285,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706668760736263850,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-knvl8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 8e734b81-4d44-4c96-8439-0ef800021bf8,},Annotations:map[string]string{io.kubernetes.container.hash: 519853ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e858bc707d09132b1e518a2c2181736e4c09ce55a58b8aafa6c23ff8692cae,PodSandboxId:9c980583871bf8e3f72388f30383d93dc710168813eb0df1edd1d3a902df3e98,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706668758623385185,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: eaba2b6b-2a00-4af9-bdb8-67d110b3eb19,},Annotations:map[string]string{io.kubernetes.container.hash: 7f6c7235,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6a89f9cac66f16c25935156705417ab09e049b4727fe064703799f2284aea66,PodSandboxId:48e234a4e80a4ecbe53ce69b4f273fc8544e164230b64abf0f2ab2e02cf8c5cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706668758448076767,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85lz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36e014b9-154e-43f4-b694-7f05bd31
baef,},Annotations:map[string]string{io.kubernetes.container.hash: 94f0b9f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c5e3ce0fa5af41914cec39488462d499a693a119c605c12522564e4c7a90f1,PodSandboxId:8e305bc4ffc862e38af72e2c04b7ed2fa29413cdbaa29583dbe7c9a1319c9283,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706668752900665725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cf9b3c171af227e879742789ab79ee,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3464868c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c714a783871b7613686f9fe6e0f214d3eb54ca5ab08f30eb0ad7faa7520e70e7,PodSandboxId:ef8f48bf59c3a1ee8a4a5bdc322a7825e4f8442a7897289eec3440ba48a2806d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706668752732380611,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7320cc932f9ec0e3160c3b0ecdf22c62,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a84020a2b1ba9d04c5f50378538d7cedd4abb2e7a18c1aad04b5a80d54dda31,PodSandboxId:7973fb4e50033d0ced741ad2297e7c32bb8af91a36e345404c7b672850d10a85,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706668752602066872,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e16e470f3c55d41e223486b2026f1d,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99f7756d2ff7480c9783154263e413ddf24b0f60ec440d991b80a2e469285ba6,PodSandboxId:e0a8161c4af5beb7fea249b8e78cacb21ac9d64770bc86fc2458f4a9ed244577,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706668752556022446,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-263108,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d670ff05d0032fcc9ae24f8fc09df250,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2b74cab2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0f88ea44-0d7c-4c9f-b8bf-fa0b0934ac69 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e3e3d2cb5a7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       3                   9c980583871bf       storage-provisioner
	a3601bb1252da       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   1c36a40c0d92c       busybox-5b5d89c9d6-dlpzg
	be6a911b4aeb7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   7b66b8f2892d6       coredns-5dd5756b68-skqw4
	47ffbc733f0a7       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   e5e7ce3781dc8       kindnet-knvl8
	19e858bc707d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       2                   9c980583871bf       storage-provisioner
	a6a89f9cac66f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   48e234a4e80a4       kube-proxy-x85lz
	d1c5e3ce0fa5a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   8e305bc4ffc86       etcd-multinode-263108
	c714a783871b7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   ef8f48bf59c3a       kube-scheduler-multinode-263108
	4a84020a2b1ba       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   7973fb4e50033       kube-controller-manager-multinode-263108
	99f7756d2ff74       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   e0a8161c4af5b       kube-apiserver-multinode-263108
	
	
	==> coredns [be6a911b4aeb713ca91d415535a1d1c9ec77496906f110a9201e42fb7f672d5d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49994 - 17544 "HINFO IN 5820428394452683903.194136402040412345. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.007236503s
	
	
	==> describe nodes <==
	Name:               multinode-263108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-263108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=multinode-263108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T02_28_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 02:28:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-263108
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 02:43:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 02:39:48 +0000   Wed, 31 Jan 2024 02:28:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 02:39:48 +0000   Wed, 31 Jan 2024 02:28:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 02:39:48 +0000   Wed, 31 Jan 2024 02:28:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 02:39:48 +0000   Wed, 31 Jan 2024 02:39:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    multinode-263108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 075a8d1fcf194b46b480cbbd7d5a2aff
	  System UUID:                075a8d1f-cf19-4b46-b480-cbbd7d5a2aff
	  Boot ID:                    89f463df-cba0-45e9-9b7b-721f0c04c1ec
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dlpzg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-skqw4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-263108                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-knvl8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-263108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-263108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-x85lz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-263108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-263108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-263108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-263108 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-263108 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-263108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-263108 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                    node-controller  Node multinode-263108 event: Registered Node multinode-263108 in Controller
	  Normal  NodeReady                14m                    kubelet          Node multinode-263108 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-263108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-263108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-263108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-263108 event: Registered Node multinode-263108 in Controller
	
	
	Name:               multinode-263108-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-263108-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=multinode-263108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_31T02_42_56_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 02:41:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-263108-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 02:42:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 02:41:13 +0000   Wed, 31 Jan 2024 02:41:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 02:41:13 +0000   Wed, 31 Jan 2024 02:41:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 02:41:13 +0000   Wed, 31 Jan 2024 02:41:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 02:41:13 +0000   Wed, 31 Jan 2024 02:41:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.60
	  Hostname:    multinode-263108-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f10c3acb8144c48b5f4d43294b1ad43
	  System UUID:                0f10c3ac-b814-4c48-b5f4-d43294b1ad43
	  Boot ID:                    98ff0ad5-b5b2-4e7e-8d96-e7fb181bcbd3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-hhtrb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-zvrh5               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-x5jb7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 13m                  kube-proxy  
	  Normal   Starting                 106s                 kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)    kubelet     Node multinode-263108-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)    kubelet     Node multinode-263108-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)    kubelet     Node multinode-263108-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                  kubelet     Node multinode-263108-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m53s                kubelet     Node multinode-263108-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m9s (x2 over 3m9s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       112s                 kubelet     Node multinode-263108-m02 status is now: NodeNotSchedulable
	  Normal   Starting                 109s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  109s (x2 over 109s)  kubelet     Node multinode-263108-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    109s (x2 over 109s)  kubelet     Node multinode-263108-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     109s (x2 over 109s)  kubelet     Node multinode-263108-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  109s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                108s                 kubelet     Node multinode-263108-m02 status is now: NodeReady
	
	
	Name:               multinode-263108-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-263108-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=multinode-263108
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_31T02_42_56_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 02:42:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-263108-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 02:42:56 +0000   Wed, 31 Jan 2024 02:42:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 02:42:56 +0000   Wed, 31 Jan 2024 02:42:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 02:42:56 +0000   Wed, 31 Jan 2024 02:42:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 02:42:56 +0000   Wed, 31 Jan 2024 02:42:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    multinode-263108-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 04b1a6032ded43b6803dab0df32f9ca7
	  System UUID:                04b1a603-2ded-43b6-803d-ab0df32f9ca7
	  Boot ID:                    3edfe78c-11e1-4c2b-aa77-2e45fc9d8a24
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-ft7n7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kindnet-88m7n               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-mpxjh            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-263108-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-263108-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-263108-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-263108-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-263108-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-263108-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-263108-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             71s                kubelet     Node multinode-263108-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        38s (x2 over 98s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeReady                9s (x2 over 11m)   kubelet     Node multinode-263108-m03 status is now: NodeReady
	  Normal   Starting                 6s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    6s (x2 over 6s)    kubelet     Node multinode-263108-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6s (x2 over 6s)    kubelet     Node multinode-263108-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6s (x2 over 6s)    kubelet     Node multinode-263108-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                5s                 kubelet     Node multinode-263108-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan31 02:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062532] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.283999] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.672041] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153909] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.409297] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.127088] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.096249] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.139236] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.101534] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.212409] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[Jan31 02:39] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	
	
	==> etcd [d1c5e3ce0fa5af41914cec39488462d499a693a119c605c12522564e4c7a90f1] <==
	{"level":"info","ts":"2024-01-31T02:39:14.5171Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-31T02:39:14.51968Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-31T02:39:14.525136Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"22872ffef731375a","initial-advertise-peer-urls":["https://192.168.39.109:2380"],"listen-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.109:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-31T02:39:14.525204Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-31T02:39:14.525403Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2024-01-31T02:39:14.525431Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2024-01-31T02:39:14.525789Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-01-31T02:39:14.526163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"22872ffef731375a switched to configuration voters=(2488010091260884826)"}
	{"level":"info","ts":"2024-01-31T02:39:14.526231Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"70942a38564785b0","local-member-id":"22872ffef731375a","added-peer-id":"22872ffef731375a","added-peer-peer-urls":["https://192.168.39.109:2380"]}
	{"level":"info","ts":"2024-01-31T02:39:14.52637Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"70942a38564785b0","local-member-id":"22872ffef731375a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T02:39:14.526408Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T02:39:15.964488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"22872ffef731375a is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-31T02:39:15.964621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"22872ffef731375a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-31T02:39:15.964686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"22872ffef731375a received MsgPreVoteResp from 22872ffef731375a at term 2"}
	{"level":"info","ts":"2024-01-31T02:39:15.964735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"22872ffef731375a became candidate at term 3"}
	{"level":"info","ts":"2024-01-31T02:39:15.964761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"22872ffef731375a received MsgVoteResp from 22872ffef731375a at term 3"}
	{"level":"info","ts":"2024-01-31T02:39:15.964788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"22872ffef731375a became leader at term 3"}
	{"level":"info","ts":"2024-01-31T02:39:15.964865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 22872ffef731375a elected leader 22872ffef731375a at term 3"}
	{"level":"info","ts":"2024-01-31T02:39:15.966421Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"22872ffef731375a","local-member-attributes":"{Name:multinode-263108 ClientURLs:[https://192.168.39.109:2379]}","request-path":"/0/members/22872ffef731375a/attributes","cluster-id":"70942a38564785b0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T02:39:15.966493Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T02:39:15.966884Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T02:39:15.96693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T02:39:15.966542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T02:39:15.968562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.109:2379"}
	{"level":"info","ts":"2024-01-31T02:39:15.968562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:43:01 up 4 min,  0 users,  load average: 0.15, 0.11, 0.05
	Linux multinode-263108 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [47ffbc733f0a7a0b36c0f8cfd8466a4ce2b3e60acde57e63d8b652b8f8ddee90] <==
	I0131 02:42:12.244437       1 main.go:250] Node multinode-263108-m03 has CIDR [10.244.3.0/24] 
	I0131 02:42:22.292553       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0131 02:42:22.292599       1 main.go:227] handling current node
	I0131 02:42:22.292611       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0131 02:42:22.292617       1 main.go:250] Node multinode-263108-m02 has CIDR [10.244.1.0/24] 
	I0131 02:42:22.292719       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0131 02:42:22.292748       1 main.go:250] Node multinode-263108-m03 has CIDR [10.244.3.0/24] 
	I0131 02:42:32.305471       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0131 02:42:32.305533       1 main.go:227] handling current node
	I0131 02:42:32.305549       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0131 02:42:32.305557       1 main.go:250] Node multinode-263108-m02 has CIDR [10.244.1.0/24] 
	I0131 02:42:32.305672       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0131 02:42:32.305680       1 main.go:250] Node multinode-263108-m03 has CIDR [10.244.3.0/24] 
	I0131 02:42:42.310619       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0131 02:42:42.310767       1 main.go:227] handling current node
	I0131 02:42:42.310794       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0131 02:42:42.310875       1 main.go:250] Node multinode-263108-m02 has CIDR [10.244.1.0/24] 
	I0131 02:42:42.310998       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0131 02:42:42.311020       1 main.go:250] Node multinode-263108-m03 has CIDR [10.244.3.0/24] 
	I0131 02:42:52.338175       1 main.go:223] Handling node with IPs: map[192.168.39.109:{}]
	I0131 02:42:52.341750       1 main.go:227] handling current node
	I0131 02:42:52.341788       1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
	I0131 02:42:52.341870       1 main.go:250] Node multinode-263108-m02 has CIDR [10.244.1.0/24] 
	I0131 02:42:52.342001       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0131 02:42:52.342024       1 main.go:250] Node multinode-263108-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [99f7756d2ff7480c9783154263e413ddf24b0f60ec440d991b80a2e469285ba6] <==
	I0131 02:39:17.249368       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0131 02:39:17.249378       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0131 02:39:17.249388       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0131 02:39:17.249411       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0131 02:39:17.249468       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0131 02:39:17.368084       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0131 02:39:17.380480       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0131 02:39:17.381260       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0131 02:39:17.381294       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0131 02:39:17.381352       1 shared_informer.go:318] Caches are synced for configmaps
	I0131 02:39:17.381414       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0131 02:39:17.381452       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0131 02:39:17.391537       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0131 02:39:17.445754       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0131 02:39:17.445934       1 aggregator.go:166] initial CRD sync complete...
	I0131 02:39:17.446024       1 autoregister_controller.go:141] Starting autoregister controller
	I0131 02:39:17.446049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0131 02:39:17.446073       1 cache.go:39] Caches are synced for autoregister controller
	I0131 02:39:18.240496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0131 02:39:19.990575       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0131 02:39:20.156272       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0131 02:39:20.166456       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0131 02:39:20.240568       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0131 02:39:20.251752       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0131 02:39:48.099610       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4a84020a2b1ba9d04c5f50378538d7cedd4abb2e7a18c1aad04b5a80d54dda31] <==
	I0131 02:41:13.040337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="5.269044ms"
	I0131 02:41:13.040440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="43.92µs"
	I0131 02:41:13.317157       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-263108-m02"
	I0131 02:41:13.881469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="286.224µs"
	I0131 02:41:27.153133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="341.849µs"
	I0131 02:41:27.735528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="82.895µs"
	I0131 02:41:27.741235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="107.04µs"
	I0131 02:41:50.818377       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-263108-m02"
	I0131 02:42:50.605436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="213.554µs"
	I0131 02:42:52.054118       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-263108-m02"
	I0131 02:42:52.320850       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-hhtrb"
	I0131 02:42:52.333035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="27.529385ms"
	I0131 02:42:52.353540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="20.361705ms"
	I0131 02:42:52.353694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.133µs"
	I0131 02:42:52.353855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="23.83µs"
	I0131 02:42:52.367268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.017µs"
	I0131 02:42:53.992640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="5.450055ms"
	I0131 02:42:53.992726       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.069µs"
	I0131 02:42:54.761636       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ft7n7" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-ft7n7"
	I0131 02:42:55.325740       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-263108-m02"
	I0131 02:42:55.939040       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-263108-m03\" does not exist"
	I0131 02:42:55.939640       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-263108-m02"
	I0131 02:42:55.957973       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-263108-m03" podCIDRs=["10.244.2.0/24"]
	I0131 02:42:56.285411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-263108-m02"
	I0131 02:42:56.848976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="76.94µs"
	
	
	==> kube-proxy [a6a89f9cac66f16c25935156705417ab09e049b4727fe064703799f2284aea66] <==
	I0131 02:39:18.807232       1 server_others.go:69] "Using iptables proxy"
	I0131 02:39:18.821194       1 node.go:141] Successfully retrieved node IP: 192.168.39.109
	I0131 02:39:18.865197       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 02:39:18.865286       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 02:39:18.867642       1 server_others.go:152] "Using iptables Proxier"
	I0131 02:39:18.867703       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 02:39:18.868074       1 server.go:846] "Version info" version="v1.28.4"
	I0131 02:39:18.868102       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 02:39:18.869262       1 config.go:188] "Starting service config controller"
	I0131 02:39:18.869292       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 02:39:18.869312       1 config.go:97] "Starting endpoint slice config controller"
	I0131 02:39:18.869315       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 02:39:18.869652       1 config.go:315] "Starting node config controller"
	I0131 02:39:18.869657       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 02:39:18.969759       1 shared_informer.go:318] Caches are synced for node config
	I0131 02:39:18.969903       1 shared_informer.go:318] Caches are synced for service config
	I0131 02:39:18.969961       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c714a783871b7613686f9fe6e0f214d3eb54ca5ab08f30eb0ad7faa7520e70e7] <==
	I0131 02:39:15.400969       1 serving.go:348] Generated self-signed cert in-memory
	W0131 02:39:17.329039       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0131 02:39:17.329252       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0131 02:39:17.329287       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0131 02:39:17.329373       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0131 02:39:17.397210       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0131 02:39:17.397254       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 02:39:17.401657       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0131 02:39:17.401897       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0131 02:39:17.401913       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0131 02:39:17.401998       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0131 02:39:17.503019       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 02:38:44 UTC, ends at Wed 2024-01-31 02:43:01 UTC. --
	Jan 31 02:39:19 multinode-263108 kubelet[919]: E0131 02:39:19.546873     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-dlpzg" podUID="0ec0b181-2e2d-4e23-9261-d2be8a85e401"
	Jan 31 02:39:19 multinode-263108 kubelet[919]: I0131 02:39:19.621083     919 scope.go:117] "RemoveContainer" containerID="ed7d372b7a59fdad14fa7df41947576f9b7f277a3b134650950d0525988be4a2"
	Jan 31 02:39:19 multinode-263108 kubelet[919]: I0131 02:39:19.621431     919 scope.go:117] "RemoveContainer" containerID="19e858bc707d09132b1e518a2c2181736e4c09ce55a58b8aafa6c23ff8692cae"
	Jan 31 02:39:19 multinode-263108 kubelet[919]: E0131 02:39:19.621584     919 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(eaba2b6b-2a00-4af9-bdb8-67d110b3eb19)\"" pod="kube-system/storage-provisioner" podUID="eaba2b6b-2a00-4af9-bdb8-67d110b3eb19"
	Jan 31 02:39:21 multinode-263108 kubelet[919]: E0131 02:39:21.174480     919 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 31 02:39:21 multinode-263108 kubelet[919]: E0131 02:39:21.174556     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/713e1df7-54be-4322-986d-b6d7db88c1c7-config-volume podName:713e1df7-54be-4322-986d-b6d7db88c1c7 nodeName:}" failed. No retries permitted until 2024-01-31 02:39:25.174541352 +0000 UTC m=+13.877628628 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/713e1df7-54be-4322-986d-b6d7db88c1c7-config-volume") pod "coredns-5dd5756b68-skqw4" (UID: "713e1df7-54be-4322-986d-b6d7db88c1c7") : object "kube-system"/"coredns" not registered
	Jan 31 02:39:21 multinode-263108 kubelet[919]: E0131 02:39:21.275473     919 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 31 02:39:21 multinode-263108 kubelet[919]: E0131 02:39:21.275507     919 projected.go:198] Error preparing data for projected volume kube-api-access-qlkkz for pod default/busybox-5b5d89c9d6-dlpzg: object "default"/"kube-root-ca.crt" not registered
	Jan 31 02:39:21 multinode-263108 kubelet[919]: E0131 02:39:21.275552     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ec0b181-2e2d-4e23-9261-d2be8a85e401-kube-api-access-qlkkz podName:0ec0b181-2e2d-4e23-9261-d2be8a85e401 nodeName:}" failed. No retries permitted until 2024-01-31 02:39:25.275537881 +0000 UTC m=+13.978625160 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qlkkz" (UniqueName: "kubernetes.io/projected/0ec0b181-2e2d-4e23-9261-d2be8a85e401-kube-api-access-qlkkz") pod "busybox-5b5d89c9d6-dlpzg" (UID: "0ec0b181-2e2d-4e23-9261-d2be8a85e401") : object "default"/"kube-root-ca.crt" not registered
	Jan 31 02:39:21 multinode-263108 kubelet[919]: E0131 02:39:21.546301     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-dlpzg" podUID="0ec0b181-2e2d-4e23-9261-d2be8a85e401"
	Jan 31 02:39:21 multinode-263108 kubelet[919]: E0131 02:39:21.547732     919 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-skqw4" podUID="713e1df7-54be-4322-986d-b6d7db88c1c7"
	Jan 31 02:39:22 multinode-263108 kubelet[919]: I0131 02:39:22.475649     919 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 31 02:39:30 multinode-263108 kubelet[919]: I0131 02:39:30.545728     919 scope.go:117] "RemoveContainer" containerID="19e858bc707d09132b1e518a2c2181736e4c09ce55a58b8aafa6c23ff8692cae"
	Jan 31 02:40:11 multinode-263108 kubelet[919]: E0131 02:40:11.559310     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 02:40:11 multinode-263108 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 02:40:11 multinode-263108 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 02:40:11 multinode-263108 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 02:41:11 multinode-263108 kubelet[919]: E0131 02:41:11.558156     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 02:41:11 multinode-263108 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 02:41:11 multinode-263108 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 02:41:11 multinode-263108 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 02:42:11 multinode-263108 kubelet[919]: E0131 02:42:11.559308     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 02:42:11 multinode-263108 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 02:42:11 multinode-263108 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 02:42:11 multinode-263108 kubelet[919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-263108 -n multinode-263108
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-263108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (687.08s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 stop
E0131 02:43:38.353098 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-263108 stop: exit status 82 (2m0.297581116s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-263108"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-263108 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-263108 status: exit status 3 (18.83231185s)

                                                
                                                
-- stdout --
	multinode-263108
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-263108-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 02:45:23.390947 1439042 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.109:22: connect: no route to host
	E0131 02:45:23.391009 1439042 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.109:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-263108 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-263108 -n multinode-263108
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-263108 -n multinode-263108: exit status 3 (3.19299222s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 02:45:26.750911 1439146 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.109:22: connect: no route to host
	E0131 02:45:26.750938 1439146 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.109:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-263108" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.32s)
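Note: the failure mode above is "minikube stop" returning exit code 82 (GUEST_STOP_TIMEOUT) while the VM stays in state "Running", after which "minikube status" returns exit code 3 because SSH to 192.168.39.109 reports "no route to host". The Go sketch below is not the test suite's code; it is a minimal reproduction of the same stop-then-status sequence, with the binary path and profile name copied from the commands logged above and everything else assumed for illustration.

// Minimal reproduction sketch (assumptions: binary path "out/minikube-linux-amd64" and
// profile "multinode-263108" come from the log above; this is not multinode_test.go).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runMinikube invokes the minikube binary and returns its exit code and combined output.
func runMinikube(args ...string) (int, string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), string(out)
	}
	if err != nil {
		return -1, err.Error()
	}
	return 0, string(out)
}

func main() {
	// A clean stop should exit 0; this run returned 82 (GUEST_STOP_TIMEOUT).
	code, out := runMinikube("-p", "multinode-263108", "stop")
	fmt.Printf("stop exit=%d\n%s\n", code, out)

	// With the VM left half-stopped, status returned 3 (SSH: no route to host).
	code, out = runMinikube("-p", "multinode-263108", "status")
	fmt.Printf("status exit=%d\n%s\n", code, out)
}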

                                                
                                    
x
+
TestPreload (218.55s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-723521 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0131 02:55:30.923850 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-723521 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m2.243790322s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-723521 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-723521 image pull gcr.io/k8s-minikube/busybox: (2.666833519s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-723521
E0131 02:55:51.555180 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-723521: (7.112976994s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-723521 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-723521 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.496058525s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-723521 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:523: *** TestPreload FAILED at 2024-01-31 02:57:19.15973726 +0000 UTC m=+3197.801085073
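For reference, the assertion that failed is the post-restart image check: gcr.io/k8s-minikube/busybox was pulled before the stop but is absent from the "image list" output above. The Go sketch below is not preload_test.go itself; it is a rough re-run of that final check, with the binary path, profile name and image name taken from the commands above and everything else assumed.

// Rough reproduction sketch of the final TestPreload assertion (assumptions: binary path,
// profile and image name come from the log above; this is not the suite's actual code).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-723521", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "image list failed: %v\n%s", err, out)
		os.Exit(1)
	}
	// The image pulled before the stop must still be present after the restart.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Fprintf(os.Stderr, "expected gcr.io/k8s-minikube/busybox in image list, got:\n%s", out)
		os.Exit(1)
	}
	fmt.Println("busybox image still present after restart")
}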
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-723521 -n test-preload-723521
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-723521 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-723521 logs -n 25: (1.127512034s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n multinode-263108 sudo cat                                       | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | /home/docker/cp-test_multinode-263108-m03_multinode-263108.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-263108 cp multinode-263108-m03:/home/docker/cp-test.txt                       | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m02:/home/docker/cp-test_multinode-263108-m03_multinode-263108-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n                                                                 | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | multinode-263108-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-263108 ssh -n multinode-263108-m02 sudo cat                                   | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | /home/docker/cp-test_multinode-263108-m03_multinode-263108-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-263108 node stop m03                                                          | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	| node    | multinode-263108 node start                                                             | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC | 31 Jan 24 02:31 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-263108                                                                | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC |                     |
	| stop    | -p multinode-263108                                                                     | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:31 UTC |                     |
	| start   | -p multinode-263108                                                                     | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:33 UTC | 31 Jan 24 02:42 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-263108                                                                | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:42 UTC |                     |
	| node    | multinode-263108 node delete                                                            | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:43 UTC | 31 Jan 24 02:43 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-263108 stop                                                                   | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:43 UTC |                     |
	| start   | -p multinode-263108                                                                     | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:45 UTC | 31 Jan 24 02:52 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-263108                                                                | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:52 UTC |                     |
	| start   | -p multinode-263108-m02                                                                 | multinode-263108-m02 | jenkins | v1.32.0 | 31 Jan 24 02:52 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-263108-m03                                                                 | multinode-263108-m03 | jenkins | v1.32.0 | 31 Jan 24 02:52 UTC | 31 Jan 24 02:53 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-263108                                                                 | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:53 UTC |                     |
	| delete  | -p multinode-263108-m03                                                                 | multinode-263108-m03 | jenkins | v1.32.0 | 31 Jan 24 02:53 UTC | 31 Jan 24 02:53 UTC |
	| delete  | -p multinode-263108                                                                     | multinode-263108     | jenkins | v1.32.0 | 31 Jan 24 02:53 UTC | 31 Jan 24 02:53 UTC |
	| start   | -p test-preload-723521                                                                  | test-preload-723521  | jenkins | v1.32.0 | 31 Jan 24 02:53 UTC | 31 Jan 24 02:55 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-723521 image pull                                                          | test-preload-723521  | jenkins | v1.32.0 | 31 Jan 24 02:55 UTC | 31 Jan 24 02:55 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-723521                                                                  | test-preload-723521  | jenkins | v1.32.0 | 31 Jan 24 02:55 UTC | 31 Jan 24 02:55 UTC |
	| start   | -p test-preload-723521                                                                  | test-preload-723521  | jenkins | v1.32.0 | 31 Jan 24 02:55 UTC | 31 Jan 24 02:57 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-723521 image list                                                          | test-preload-723521  | jenkins | v1.32.0 | 31 Jan 24 02:57 UTC | 31 Jan 24 02:57 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 02:55:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 02:55:55.481267 1441872 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:55:55.481539 1441872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:55:55.481549 1441872 out.go:309] Setting ErrFile to fd 2...
	I0131 02:55:55.481553 1441872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:55:55.481758 1441872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:55:55.482301 1441872 out.go:303] Setting JSON to false
	I0131 02:55:55.483353 1441872 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":27499,"bootTime":1706642257,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 02:55:55.483417 1441872 start.go:138] virtualization: kvm guest
	I0131 02:55:55.485974 1441872 out.go:177] * [test-preload-723521] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 02:55:55.487491 1441872 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 02:55:55.487549 1441872 notify.go:220] Checking for updates...
	I0131 02:55:55.488976 1441872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 02:55:55.490408 1441872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:55:55.491757 1441872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:55:55.493279 1441872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 02:55:55.494924 1441872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 02:55:55.496998 1441872 config.go:182] Loaded profile config "test-preload-723521": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0131 02:55:55.497739 1441872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:55:55.497803 1441872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:55:55.512682 1441872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I0131 02:55:55.513211 1441872 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:55:55.513741 1441872 main.go:141] libmachine: Using API Version  1
	I0131 02:55:55.513769 1441872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:55:55.514169 1441872 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:55:55.514385 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:55:55.516250 1441872 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0131 02:55:55.517777 1441872 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 02:55:55.518223 1441872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:55:55.518273 1441872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:55:55.532766 1441872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0131 02:55:55.533232 1441872 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:55:55.533864 1441872 main.go:141] libmachine: Using API Version  1
	I0131 02:55:55.533896 1441872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:55:55.534314 1441872 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:55:55.534521 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:55:55.570529 1441872 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 02:55:55.571808 1441872 start.go:298] selected driver: kvm2
	I0131 02:55:55.571823 1441872 start.go:902] validating driver "kvm2" against &{Name:test-preload-723521 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-723521 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:55:55.571954 1441872 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 02:55:55.572661 1441872 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:55:55.572759 1441872 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 02:55:55.588465 1441872 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 02:55:55.588850 1441872 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 02:55:55.588937 1441872 cni.go:84] Creating CNI manager for ""
	I0131 02:55:55.588956 1441872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:55:55.588975 1441872 start_flags.go:321] config:
	{Name:test-preload-723521 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-723521 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:55:55.589198 1441872 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:55:55.591240 1441872 out.go:177] * Starting control plane node test-preload-723521 in cluster test-preload-723521
	I0131 02:55:55.592726 1441872 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0131 02:55:55.908390 1441872 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0131 02:55:55.908418 1441872 cache.go:56] Caching tarball of preloaded images
	I0131 02:55:55.908578 1441872 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0131 02:55:55.910469 1441872 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0131 02:55:55.911729 1441872 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:55:56.013012 1441872 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0131 02:56:09.765121 1441872 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:56:09.765227 1441872 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:56:10.809533 1441872 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I0131 02:56:10.809674 1441872 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/config.json ...
	I0131 02:56:10.809917 1441872 start.go:365] acquiring machines lock for test-preload-723521: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 02:56:10.809994 1441872 start.go:369] acquired machines lock for "test-preload-723521" in 47.072µs
	I0131 02:56:10.810009 1441872 start.go:96] Skipping create...Using existing machine configuration
	I0131 02:56:10.810034 1441872 fix.go:54] fixHost starting: 
	I0131 02:56:10.810325 1441872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:56:10.810366 1441872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:56:10.825106 1441872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0131 02:56:10.825627 1441872 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:56:10.826081 1441872 main.go:141] libmachine: Using API Version  1
	I0131 02:56:10.826105 1441872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:56:10.826499 1441872 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:56:10.826704 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:56:10.826884 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetState
	I0131 02:56:10.828653 1441872 fix.go:102] recreateIfNeeded on test-preload-723521: state=Stopped err=<nil>
	I0131 02:56:10.828676 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	W0131 02:56:10.828843 1441872 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 02:56:10.831853 1441872 out.go:177] * Restarting existing kvm2 VM for "test-preload-723521" ...
	I0131 02:56:10.833314 1441872 main.go:141] libmachine: (test-preload-723521) Calling .Start
	I0131 02:56:10.833495 1441872 main.go:141] libmachine: (test-preload-723521) Ensuring networks are active...
	I0131 02:56:10.834342 1441872 main.go:141] libmachine: (test-preload-723521) Ensuring network default is active
	I0131 02:56:10.834755 1441872 main.go:141] libmachine: (test-preload-723521) Ensuring network mk-test-preload-723521 is active
	I0131 02:56:10.835105 1441872 main.go:141] libmachine: (test-preload-723521) Getting domain xml...
	I0131 02:56:10.835966 1441872 main.go:141] libmachine: (test-preload-723521) Creating domain...
	I0131 02:56:12.025641 1441872 main.go:141] libmachine: (test-preload-723521) Waiting to get IP...
	I0131 02:56:12.026630 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:12.027048 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:12.027197 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:12.027030 1441940 retry.go:31] will retry after 234.397154ms: waiting for machine to come up
	I0131 02:56:12.263751 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:12.264197 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:12.264230 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:12.264153 1441940 retry.go:31] will retry after 341.803623ms: waiting for machine to come up
	I0131 02:56:12.607775 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:12.608228 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:12.608253 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:12.608180 1441940 retry.go:31] will retry after 430.834409ms: waiting for machine to come up
	I0131 02:56:13.040912 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:13.041376 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:13.041403 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:13.041310 1441940 retry.go:31] will retry after 559.358741ms: waiting for machine to come up
	I0131 02:56:13.602032 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:13.602500 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:13.602537 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:13.602426 1441940 retry.go:31] will retry after 475.081224ms: waiting for machine to come up
	I0131 02:56:14.079266 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:14.079786 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:14.079813 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:14.079729 1441940 retry.go:31] will retry after 779.499339ms: waiting for machine to come up
	I0131 02:56:14.860650 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:14.861208 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:14.861242 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:14.861150 1441940 retry.go:31] will retry after 721.652761ms: waiting for machine to come up
	I0131 02:56:15.584171 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:15.584581 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:15.584616 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:15.584527 1441940 retry.go:31] will retry after 1.346605183s: waiting for machine to come up
	I0131 02:56:16.932624 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:16.933100 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:16.933130 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:16.933048 1441940 retry.go:31] will retry after 1.564932008s: waiting for machine to come up
	I0131 02:56:18.499691 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:18.500111 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:18.500141 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:18.500052 1441940 retry.go:31] will retry after 2.169445786s: waiting for machine to come up
	I0131 02:56:20.670899 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:20.671277 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:20.671330 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:20.671246 1441940 retry.go:31] will retry after 2.87983176s: waiting for machine to come up
	I0131 02:56:23.552212 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:23.552712 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:23.552745 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:23.552650 1441940 retry.go:31] will retry after 3.623649405s: waiting for machine to come up
	I0131 02:56:27.177937 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:27.178444 1441872 main.go:141] libmachine: (test-preload-723521) DBG | unable to find current IP address of domain test-preload-723521 in network mk-test-preload-723521
	I0131 02:56:27.178505 1441872 main.go:141] libmachine: (test-preload-723521) DBG | I0131 02:56:27.178422 1441940 retry.go:31] will retry after 3.148645788s: waiting for machine to come up
	I0131 02:56:30.330958 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.331395 1441872 main.go:141] libmachine: (test-preload-723521) Found IP for machine: 192.168.39.101
	I0131 02:56:30.331422 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has current primary IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.331428 1441872 main.go:141] libmachine: (test-preload-723521) Reserving static IP address...
	I0131 02:56:30.331923 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "test-preload-723521", mac: "52:54:00:4e:7a:51", ip: "192.168.39.101"} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:30.331950 1441872 main.go:141] libmachine: (test-preload-723521) DBG | skip adding static IP to network mk-test-preload-723521 - found existing host DHCP lease matching {name: "test-preload-723521", mac: "52:54:00:4e:7a:51", ip: "192.168.39.101"}
	I0131 02:56:30.331962 1441872 main.go:141] libmachine: (test-preload-723521) Reserved static IP address: 192.168.39.101
	I0131 02:56:30.331973 1441872 main.go:141] libmachine: (test-preload-723521) Waiting for SSH to be available...
	I0131 02:56:30.331981 1441872 main.go:141] libmachine: (test-preload-723521) DBG | Getting to WaitForSSH function...
	I0131 02:56:30.334095 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.334412 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:30.334438 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.334641 1441872 main.go:141] libmachine: (test-preload-723521) DBG | Using SSH client type: external
	I0131 02:56:30.334701 1441872 main.go:141] libmachine: (test-preload-723521) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/test-preload-723521/id_rsa (-rw-------)
	I0131 02:56:30.334757 1441872 main.go:141] libmachine: (test-preload-723521) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.101 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/test-preload-723521/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 02:56:30.334785 1441872 main.go:141] libmachine: (test-preload-723521) DBG | About to run SSH command:
	I0131 02:56:30.334798 1441872 main.go:141] libmachine: (test-preload-723521) DBG | exit 0
	I0131 02:56:30.418192 1441872 main.go:141] libmachine: (test-preload-723521) DBG | SSH cmd err, output: <nil>: 
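
The retry lines above show libmachine polling for the guest's DHCP lease with growing, jittered delays and then confirming reachability with a bare `exit 0` over SSH. A minimal sketch of that retry-with-backoff pattern, using only the Go standard library; `machineUp` is a hypothetical stand-in for the lease/SSH probes in the log, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// machineUp is a hypothetical probe standing in for the DHCP-lease lookup
// and the "ssh ... exit 0" check that appear in the log above.
func machineUp() error { return errors.New("unable to find current IP address") }

func waitForMachine(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 2 * time.Second
	for time.Now().Before(deadline) {
		if err := machineUp(); err == nil {
			return nil // machine answered; SSH is available
		}
		// Add jitter so parallel waiters do not probe in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += time.Second // grow the base delay between attempts
	}
	return fmt.Errorf("machine did not come up within %v", timeout)
}

func main() {
	if err := waitForMachine(10 * time.Second); err != nil {
		fmt.Println(err)
	}
}
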
	I0131 02:56:30.418690 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetConfigRaw
	I0131 02:56:30.419414 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetIP
	I0131 02:56:30.422081 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.422419 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:30.422452 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.422752 1441872 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/config.json ...
	I0131 02:56:30.423020 1441872 machine.go:88] provisioning docker machine ...
	I0131 02:56:30.423046 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:56:30.423327 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetMachineName
	I0131 02:56:30.423486 1441872 buildroot.go:166] provisioning hostname "test-preload-723521"
	I0131 02:56:30.423512 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetMachineName
	I0131 02:56:30.423671 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:56:30.426093 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.426388 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:30.426419 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.426637 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:56:30.426862 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:30.427028 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:30.427179 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:56:30.427336 1441872 main.go:141] libmachine: Using SSH client type: native
	I0131 02:56:30.427683 1441872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0131 02:56:30.427702 1441872 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-723521 && echo "test-preload-723521" | sudo tee /etc/hostname
	I0131 02:56:30.546432 1441872 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-723521
	
	I0131 02:56:30.546502 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:56:30.549541 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.549911 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:30.549953 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.550142 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:56:30.550368 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:30.550546 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:30.550723 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:56:30.550926 1441872 main.go:141] libmachine: Using SSH client type: native
	I0131 02:56:30.551248 1441872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0131 02:56:30.551266 1441872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-723521' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-723521/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-723521' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 02:56:30.666824 1441872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 02:56:30.666861 1441872 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 02:56:30.666883 1441872 buildroot.go:174] setting up certificates
	I0131 02:56:30.666898 1441872 provision.go:83] configureAuth start
	I0131 02:56:30.666908 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetMachineName
	I0131 02:56:30.667258 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetIP
	I0131 02:56:30.670283 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.670721 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:30.670760 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.670889 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:56:30.673352 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.673629 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:30.673667 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.673781 1441872 provision.go:138] copyHostCerts
	I0131 02:56:30.673836 1441872 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 02:56:30.673846 1441872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 02:56:30.673914 1441872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 02:56:30.674018 1441872 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 02:56:30.674029 1441872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 02:56:30.674054 1441872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 02:56:30.674145 1441872 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 02:56:30.674154 1441872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 02:56:30.674178 1441872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 02:56:30.674228 1441872 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.test-preload-723521 san=[192.168.39.101 192.168.39.101 localhost 127.0.0.1 minikube test-preload-723521]
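
The "generating server cert" line above lists the SANs baked into the machine's server certificate (the VM IP, localhost, 127.0.0.1, minikube, and the profile name). A self-contained sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; the CA here is a hypothetical stand-in generated on the fly, whereas the log reuses the existing minikubeCA key, and this is not minikube's actual provisioning code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Hypothetical CA standing in for the minikubeCA key/cert reused above.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-723521"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "test-preload-723521"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.101"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
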
	I0131 02:56:30.771337 1441872 provision.go:172] copyRemoteCerts
	I0131 02:56:30.771400 1441872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 02:56:30.771425 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:56:30.774276 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.774746 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:30.774776 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.775013 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:56:30.775287 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:30.775463 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:56:30.775678 1441872 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/test-preload-723521/id_rsa Username:docker}
	I0131 02:56:30.859065 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 02:56:30.880665 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0131 02:56:30.902227 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 02:56:30.923898 1441872 provision.go:86] duration metric: configureAuth took 256.983793ms
	I0131 02:56:30.923939 1441872 buildroot.go:189] setting minikube options for container-runtime
	I0131 02:56:30.924152 1441872 config.go:182] Loaded profile config "test-preload-723521": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0131 02:56:30.924243 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:56:30.927031 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.927429 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:30.927459 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:30.927639 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:56:30.927874 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:30.928155 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:30.928287 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:56:30.928501 1441872 main.go:141] libmachine: Using SSH client type: native
	I0131 02:56:30.928908 1441872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0131 02:56:30.928930 1441872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 02:56:31.212068 1441872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 02:56:31.212106 1441872 machine.go:91] provisioned docker machine in 789.067749ms
	I0131 02:56:31.212123 1441872 start.go:300] post-start starting for "test-preload-723521" (driver="kvm2")
	I0131 02:56:31.212139 1441872 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 02:56:31.212163 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:56:31.212487 1441872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 02:56:31.212521 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:56:31.215160 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.215542 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:31.215573 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.215764 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:56:31.216004 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:31.216264 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:56:31.216406 1441872 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/test-preload-723521/id_rsa Username:docker}
	I0131 02:56:31.301011 1441872 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 02:56:31.305027 1441872 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 02:56:31.305054 1441872 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 02:56:31.305139 1441872 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 02:56:31.305258 1441872 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 02:56:31.305374 1441872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 02:56:31.314045 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:56:31.338171 1441872 start.go:303] post-start completed in 126.031588ms
	I0131 02:56:31.338201 1441872 fix.go:56] fixHost completed within 20.528185746s
	I0131 02:56:31.338225 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:56:31.340867 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.341173 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:31.341207 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.341375 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:56:31.341605 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:31.341833 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:31.342020 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:56:31.342214 1441872 main.go:141] libmachine: Using SSH client type: native
	I0131 02:56:31.342545 1441872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I0131 02:56:31.342557 1441872 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 02:56:31.451072 1441872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706669791.403539554
	
	I0131 02:56:31.451102 1441872 fix.go:206] guest clock: 1706669791.403539554
	I0131 02:56:31.451113 1441872 fix.go:219] Guest: 2024-01-31 02:56:31.403539554 +0000 UTC Remote: 2024-01-31 02:56:31.338205778 +0000 UTC m=+35.908521164 (delta=65.333776ms)
	I0131 02:56:31.451140 1441872 fix.go:190] guest clock delta is within tolerance: 65.333776ms
	I0131 02:56:31.451145 1441872 start.go:83] releasing machines lock for "test-preload-723521", held for 20.641140997s
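
The fix.go lines above compare the guest's clock against the host's and accept the machine when the drift is small (65ms here). The `%!s(MISSING)` rendering is a logging artifact; the command evidently being run on the guest is `date +%s.%N`. A minimal sketch of parsing that output and checking the drift, standard library only, with a hypothetical tolerance value rather than minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the "seconds.nanoseconds" string produced by
// `date +%s.%N` on the guest (the value seen in the log above).
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1706669791.403539554")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical threshold, not minikube's actual value
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
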
	I0131 02:56:31.451164 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:56:31.451502 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetIP
	I0131 02:56:31.454542 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.454973 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:31.455006 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.455278 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:56:31.455951 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:56:31.456207 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:56:31.456330 1441872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 02:56:31.456388 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:56:31.456461 1441872 ssh_runner.go:195] Run: cat /version.json
	I0131 02:56:31.456492 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:56:31.459542 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.459813 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.459945 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:31.459975 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.460073 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:56:31.460162 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:31.460194 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:31.460269 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:31.460343 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:56:31.460430 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:56:31.460529 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:56:31.460593 1441872 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/test-preload-723521/id_rsa Username:docker}
	I0131 02:56:31.460748 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:56:31.460898 1441872 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/test-preload-723521/id_rsa Username:docker}
	I0131 02:56:31.571694 1441872 ssh_runner.go:195] Run: systemctl --version
	I0131 02:56:31.577334 1441872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 02:56:31.716373 1441872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 02:56:31.722603 1441872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 02:56:31.722665 1441872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 02:56:31.736284 1441872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 02:56:31.736311 1441872 start.go:475] detecting cgroup driver to use...
	I0131 02:56:31.736373 1441872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 02:56:31.752444 1441872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 02:56:31.764697 1441872 docker.go:217] disabling cri-docker service (if available) ...
	I0131 02:56:31.764770 1441872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 02:56:31.777428 1441872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 02:56:31.790552 1441872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 02:56:31.897219 1441872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 02:56:32.015370 1441872 docker.go:233] disabling docker service ...
	I0131 02:56:32.015468 1441872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 02:56:32.028412 1441872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 02:56:32.040163 1441872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 02:56:32.145692 1441872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 02:56:32.244991 1441872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 02:56:32.257539 1441872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 02:56:32.273710 1441872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0131 02:56:32.273796 1441872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:56:32.283234 1441872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 02:56:32.283304 1441872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:56:32.292843 1441872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:56:32.302297 1441872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 02:56:32.311598 1441872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 02:56:32.321188 1441872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 02:56:32.329838 1441872 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 02:56:32.329910 1441872 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 02:56:32.342465 1441872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 02:56:32.351246 1441872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 02:56:32.445299 1441872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 02:56:32.597291 1441872 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 02:56:32.597371 1441872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 02:56:32.601853 1441872 start.go:543] Will wait 60s for crictl version
	I0131 02:56:32.601929 1441872 ssh_runner.go:195] Run: which crictl
	I0131 02:56:32.607777 1441872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 02:56:32.640894 1441872 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
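
After rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting crio, the flow above waits up to 60s for the CRI socket to appear and for crictl to answer. A small sketch of that socket wait, polling with os.Stat; the poll interval is an assumption, and this is not the ssh_runner-based check minikube itself uses:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists or the timeout expires,
// mirroring the "Will wait 60s for socket path /var/run/crio/crio.sock"
// step in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %v", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
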
	I0131 02:56:32.640995 1441872 ssh_runner.go:195] Run: crio --version
	I0131 02:56:32.685925 1441872 ssh_runner.go:195] Run: crio --version
	I0131 02:56:32.734082 1441872 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I0131 02:56:32.735477 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetIP
	I0131 02:56:32.738468 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:32.738901 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:56:32.738935 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:56:32.739172 1441872 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 02:56:32.743165 1441872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 02:56:32.754827 1441872 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0131 02:56:32.754917 1441872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 02:56:32.790932 1441872 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0131 02:56:32.791010 1441872 ssh_runner.go:195] Run: which lz4
	I0131 02:56:32.794683 1441872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 02:56:32.798474 1441872 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 02:56:32.798522 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0131 02:56:34.406796 1441872 crio.go:444] Took 1.612146 seconds to copy over tarball
	I0131 02:56:34.406865 1441872 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 02:56:37.055137 1441872 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.648244242s)
	I0131 02:56:37.055180 1441872 crio.go:451] Took 2.648357 seconds to extract the tarball
	I0131 02:56:37.055190 1441872 ssh_runner.go:146] rm: /preloaded.tar.lz4
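
The preload step above decides whether the cached image tarball needs to be copied and extracted by asking the runtime which images it already has (`sudo crictl images --output json`) and looking for the expected kube-apiserver tag. A sketch of that check via os/exec; the JSON shape used here (a top-level "images" list with "repoTags") is an assumption about crictl's output, not taken from this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors (as an assumption) the shape of `crictl images
// --output json`: a top-level "images" list whose entries carry repoTags.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already knows the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.24.4")
	fmt.Println(ok, err)
}
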
	I0131 02:56:37.095212 1441872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 02:56:37.137378 1441872 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0131 02:56:37.137406 1441872 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 02:56:37.137471 1441872 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 02:56:37.137500 1441872 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0131 02:56:37.137531 1441872 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0131 02:56:37.137553 1441872 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0131 02:56:37.137603 1441872 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0131 02:56:37.137608 1441872 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0131 02:56:37.137617 1441872 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0131 02:56:37.137634 1441872 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0131 02:56:37.139156 1441872 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0131 02:56:37.139157 1441872 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0131 02:56:37.139181 1441872 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 02:56:37.139156 1441872 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0131 02:56:37.139157 1441872 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0131 02:56:37.139157 1441872 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0131 02:56:37.139164 1441872 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0131 02:56:37.139467 1441872 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0131 02:56:37.347275 1441872 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0131 02:56:37.354714 1441872 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0131 02:56:37.364335 1441872 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0131 02:56:37.366466 1441872 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0131 02:56:37.372834 1441872 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0131 02:56:37.397537 1441872 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0131 02:56:37.399924 1441872 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0131 02:56:37.442787 1441872 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0131 02:56:37.442834 1441872 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0131 02:56:37.442877 1441872 ssh_runner.go:195] Run: which crictl
	I0131 02:56:37.455694 1441872 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0131 02:56:37.455745 1441872 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0131 02:56:37.455801 1441872 ssh_runner.go:195] Run: which crictl
	I0131 02:56:37.490182 1441872 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0131 02:56:37.490237 1441872 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0131 02:56:37.490290 1441872 ssh_runner.go:195] Run: which crictl
	I0131 02:56:37.504250 1441872 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0131 02:56:37.504305 1441872 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0131 02:56:37.504337 1441872 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0131 02:56:37.504367 1441872 ssh_runner.go:195] Run: which crictl
	I0131 02:56:37.504376 1441872 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0131 02:56:37.504422 1441872 ssh_runner.go:195] Run: which crictl
	I0131 02:56:37.528276 1441872 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0131 02:56:37.528316 1441872 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0131 02:56:37.528362 1441872 ssh_runner.go:195] Run: which crictl
	I0131 02:56:37.531876 1441872 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0131 02:56:37.531919 1441872 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0131 02:56:37.531957 1441872 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0131 02:56:37.531981 1441872 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0131 02:56:37.532019 1441872 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0131 02:56:37.532041 1441872 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0131 02:56:37.531959 1441872 ssh_runner.go:195] Run: which crictl
	I0131 02:56:37.532106 1441872 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0131 02:56:37.534307 1441872 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0131 02:56:37.653244 1441872 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0131 02:56:37.653365 1441872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0131 02:56:37.653448 1441872 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0131 02:56:37.653549 1441872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0131 02:56:37.666167 1441872 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0131 02:56:37.666194 1441872 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0131 02:56:37.666281 1441872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0131 02:56:37.666304 1441872 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0131 02:56:37.666385 1441872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0131 02:56:37.678959 1441872 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0131 02:56:37.679105 1441872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0131 02:56:37.688788 1441872 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0131 02:56:37.688841 1441872 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0131 02:56:37.688861 1441872 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0131 02:56:37.688884 1441872 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0131 02:56:37.688907 1441872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0131 02:56:37.688916 1441872 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0131 02:56:37.733823 1441872 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0131 02:56:37.733891 1441872 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0131 02:56:37.733929 1441872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0131 02:56:37.733973 1441872 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0131 02:56:37.734023 1441872 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0131 02:56:37.734074 1441872 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0131 02:56:38.008860 1441872 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 02:56:40.458546 1441872 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: (2.724591933s)
	I0131 02:56:40.458594 1441872 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0131 02:56:40.458636 1441872 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.449738548s)
	I0131 02:56:40.458655 1441872 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.769717425s)
	I0131 02:56:40.458675 1441872 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0131 02:56:40.458702 1441872 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0131 02:56:40.458761 1441872 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0131 02:56:40.593933 1441872 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0131 02:56:40.593978 1441872 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0131 02:56:40.594040 1441872 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0131 02:56:41.037036 1441872 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0131 02:56:41.037084 1441872 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0131 02:56:41.037155 1441872 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0131 02:56:41.780329 1441872 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0131 02:56:41.780388 1441872 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0131 02:56:41.780450 1441872 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0131 02:56:44.030684 1441872 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.250201488s)
	I0131 02:56:44.030729 1441872 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0131 02:56:44.030755 1441872 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0131 02:56:44.030813 1441872 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0131 02:56:44.776026 1441872 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0131 02:56:44.776087 1441872 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0131 02:56:44.776146 1441872 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0131 02:56:45.212655 1441872 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0131 02:56:45.212713 1441872 cache_images.go:123] Successfully loaded all cached images
	I0131 02:56:45.212721 1441872 cache_images.go:92] LoadImages completed in 8.075302312s
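
For each cached image above, the loader first runs `stat -c "%s %y"` on the destination and skips the transfer when the file already exists ("copy: skipping ... (exists)"), otherwise it scps the archive and feeds it to `podman load -i`. A simplified local sketch of the "skip if already present and unchanged" check, comparing size and modification time of two paths; minikube performs the remote half over SSH, which is omitted here:

package main

import (
	"fmt"
	"os"
)

// sameFile reports whether dst already matches src by size and modification
// time, the criterion behind the "copy: skipping ... (exists)" lines above.
func sameFile(src, dst string) (bool, error) {
	s, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	d, err := os.Stat(dst)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil
		}
		return false, err
	}
	return s.Size() == d.Size() && s.ModTime().Equal(d.ModTime()), nil
}

func main() {
	// Hypothetical paths for illustration only.
	ok, err := sameFile("/tmp/cache/images/pause_3.7", "/var/lib/minikube/images/pause_3.7")
	fmt.Println(ok, err)
}
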
	I0131 02:56:45.212796 1441872 ssh_runner.go:195] Run: crio config
	I0131 02:56:45.270774 1441872 cni.go:84] Creating CNI manager for ""
	I0131 02:56:45.270796 1441872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:56:45.270816 1441872 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 02:56:45.270835 1441872 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.101 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-723521 NodeName:test-preload-723521 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 02:56:45.270964 1441872 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-723521"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.101"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 02:56:45.271025 1441872 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-723521 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-723521 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 02:56:45.271076 1441872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0131 02:56:45.279444 1441872 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 02:56:45.279523 1441872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 02:56:45.287704 1441872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0131 02:56:45.302983 1441872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 02:56:45.318122 1441872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0131 02:56:45.334052 1441872 ssh_runner.go:195] Run: grep 192.168.39.101	control-plane.minikube.internal$ /etc/hosts
	I0131 02:56:45.337683 1441872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 02:56:45.349646 1441872 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521 for IP: 192.168.39.101
	I0131 02:56:45.349694 1441872 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:56:45.349875 1441872 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 02:56:45.349924 1441872 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 02:56:45.350013 1441872 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/client.key
	I0131 02:56:45.350104 1441872 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/apiserver.key.396b98ae
	I0131 02:56:45.350159 1441872 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/proxy-client.key
	I0131 02:56:45.350314 1441872 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 02:56:45.350356 1441872 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 02:56:45.350369 1441872 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 02:56:45.350408 1441872 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 02:56:45.350441 1441872 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 02:56:45.350468 1441872 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 02:56:45.350541 1441872 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 02:56:45.351348 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 02:56:45.373699 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 02:56:45.395270 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 02:56:45.415984 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 02:56:45.436941 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 02:56:45.457431 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 02:56:45.477755 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 02:56:45.498619 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 02:56:45.518977 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 02:56:45.539629 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 02:56:45.559958 1441872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 02:56:45.580169 1441872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 02:56:45.595687 1441872 ssh_runner.go:195] Run: openssl version
	I0131 02:56:45.600923 1441872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 02:56:45.610389 1441872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 02:56:45.614390 1441872 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 02:56:45.614444 1441872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 02:56:45.619392 1441872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 02:56:45.628350 1441872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 02:56:45.637398 1441872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:56:45.641443 1441872 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:56:45.641491 1441872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 02:56:45.646467 1441872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 02:56:45.655861 1441872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 02:56:45.664979 1441872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 02:56:45.669121 1441872 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 02:56:45.669182 1441872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 02:56:45.674217 1441872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 02:56:45.683521 1441872 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 02:56:45.687577 1441872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 02:56:45.693192 1441872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 02:56:45.698462 1441872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 02:56:45.703965 1441872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 02:56:45.709385 1441872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 02:56:45.714634 1441872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
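The -checkend 86400 runs above assert that each control-plane certificate is still valid 24 hours from now. A minimal Go equivalent using crypto/x509 (a sketch only; the path in main is one of the certs the log checks):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path remains valid for at
    // least d, i.e. the check done by `openssl x509 -noout -in <path> -checkend <seconds>`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for another 24h:", ok)
    }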
	I0131 02:56:45.720109 1441872 kubeadm.go:404] StartCluster: {Name:test-preload-723521 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-723521 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:56:45.720189 1441872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 02:56:45.720233 1441872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 02:56:45.756197 1441872 cri.go:89] found id: ""
	I0131 02:56:45.756273 1441872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 02:56:45.765754 1441872 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 02:56:45.765779 1441872 kubeadm.go:636] restartCluster start
	I0131 02:56:45.765828 1441872 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 02:56:45.774361 1441872 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:45.774869 1441872 kubeconfig.go:135] verify returned: extract IP: "test-preload-723521" does not appear in /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:56:45.774996 1441872 kubeconfig.go:146] "test-preload-723521" context is missing from /home/jenkins/minikube-integration/18051-1412717/kubeconfig - will repair!
	I0131 02:56:45.775311 1441872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:56:45.775946 1441872 kapi.go:59] client config for test-preload-723521: &rest.Config{Host:"https://192.168.39.101:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
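At this point the profile's context is missing from the shared kubeconfig, so it is repaired before the restart continues. A hedged client-go sketch of that kind of repair, re-adding a cluster/user/context triple to an existing kubeconfig (the paths and names below are illustrative, not minikube's exact logic):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairContext adds a missing cluster/user/context entry to a kubeconfig,
    // the kind of fix the kubeconfig.go lines above describe.
    func repairContext(kubeconfig, name, server, caFile, certFile, keyFile string) error {
    	cfg, err := clientcmd.LoadFromFile(kubeconfig)
    	if err != nil {
    		return err
    	}
    	cluster := api.NewCluster()
    	cluster.Server = server
    	cluster.CertificateAuthority = caFile
    	cfg.Clusters[name] = cluster

    	user := api.NewAuthInfo()
    	user.ClientCertificate = certFile
    	user.ClientKey = keyFile
    	cfg.AuthInfos[name] = user

    	ctx := api.NewContext()
    	ctx.Cluster = name
    	ctx.AuthInfo = name
    	cfg.Contexts[name] = ctx

    	return clientcmd.WriteToFile(*cfg, kubeconfig)
    }

    func main() {
    	// Server and profile name come from the log; the cert paths are placeholders.
    	err := repairContext("/home/jenkins/minikube-integration/18051-1412717/kubeconfig",
    		"test-preload-723521", "https://192.168.39.101:8443",
    		"/path/to/ca.crt", "/path/to/client.crt", "/path/to/client.key")
    	if err != nil {
    		fmt.Println(err)
    	}
    }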
	I0131 02:56:45.776709 1441872 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 02:56:45.785249 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:45.785335 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:45.795793 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:46.285740 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:46.285859 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:46.297167 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:46.785818 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:46.785930 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:46.796936 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:47.286150 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:47.286242 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:47.297492 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:47.786264 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:47.786358 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:47.798106 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:48.285577 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:48.285685 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:48.297269 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:48.785824 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:48.785941 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:48.797562 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:49.286240 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:49.286341 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:49.297894 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:49.785460 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:49.785549 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:49.796294 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:50.285910 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:50.286011 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:50.296936 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:50.785873 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:50.785977 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:50.796979 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:51.286088 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:51.286174 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:51.297572 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:51.786231 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:51.786335 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:51.797196 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:52.285713 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:52.285810 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:52.297026 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:52.785537 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:52.785637 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:52.796482 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:53.286114 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:53.286217 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:53.297379 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:53.786023 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:53.786134 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:53.797397 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:54.286059 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:54.286171 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:54.297704 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:54.786342 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:54.786442 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:54.797442 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:55.286144 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:55.286269 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:55.297766 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:55.786135 1441872 api_server.go:166] Checking apiserver status ...
	I0131 02:56:55.786239 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 02:56:55.797383 1441872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 02:56:55.797424 1441872 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 02:56:55.797435 1441872 kubeadm.go:1135] stopping kube-system containers ...
	I0131 02:56:55.797447 1441872 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 02:56:55.797501 1441872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 02:56:55.834076 1441872 cri.go:89] found id: ""
	I0131 02:56:55.834164 1441872 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 02:56:55.848845 1441872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 02:56:55.858858 1441872 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 02:56:55.858944 1441872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 02:56:55.867997 1441872 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 02:56:55.868032 1441872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:56:55.982829 1441872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:56:56.755065 1441872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:56:57.097277 1441872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:56:57.164432 1441872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:56:57.265254 1441872 api_server.go:52] waiting for apiserver process to appear ...
	I0131 02:56:57.265342 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:56:57.765772 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:56:58.265986 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:56:58.766125 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:56:59.265883 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:56:59.288607 1441872 api_server.go:72] duration metric: took 2.02335042s to wait for apiserver process to appear ...
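The half-second-spaced pgrep attempts above are the process-level wait: the restart only proceeds once a kube-apiserver process exists. A small Go sketch of that polling loop (illustrative; the log runs the same pgrep with sudo over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForProcess polls `pgrep -xnf <pattern>` (the command shown in the log)
    // until it reports a PID or the timeout expires.
    func waitForProcess(pattern string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
    	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver pid:", pid)
    }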
	I0131 02:56:59.288647 1441872 api_server.go:88] waiting for apiserver healthz status ...
	I0131 02:56:59.288667 1441872 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I0131 02:57:04.196286 1441872 api_server.go:279] https://192.168.39.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 02:57:04.196321 1441872 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 02:57:04.196334 1441872 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I0131 02:57:04.242803 1441872 api_server.go:279] https://192.168.39.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 02:57:04.242836 1441872 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 02:57:04.288977 1441872 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I0131 02:57:04.301010 1441872 api_server.go:279] https://192.168.39.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0131 02:57:04.301048 1441872 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0131 02:57:04.789680 1441872 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I0131 02:57:04.797913 1441872 api_server.go:279] https://192.168.39.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0131 02:57:04.797943 1441872 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0131 02:57:05.289729 1441872 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I0131 02:57:05.297425 1441872 api_server.go:279] https://192.168.39.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0131 02:57:05.297453 1441872 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0131 02:57:05.788741 1441872 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I0131 02:57:05.795171 1441872 api_server.go:279] https://192.168.39.101:8443/healthz returned 200:
	ok
	I0131 02:57:05.803248 1441872 api_server.go:141] control plane version: v1.24.4
	I0131 02:57:05.803279 1441872 api_server.go:131] duration metric: took 6.514625708s to wait for apiserver health ...
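The healthz sequence above is typical of a restart: anonymous requests are rejected with 403 until RBAC bootstrap finishes, /healthz then reports individual post-start hooks as failed (500) until they complete, and finally returns 200 "ok". A minimal Go polling sketch of that wait (TLS verification is skipped only to keep the sketch short; a real client would verify against the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns
    // 200 "ok" or the deadline passes, mirroring the wait shown in the log.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.39.101:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }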
	I0131 02:57:05.803288 1441872 cni.go:84] Creating CNI manager for ""
	I0131 02:57:05.803294 1441872 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:57:05.805437 1441872 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 02:57:05.807025 1441872 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 02:57:05.817313 1441872 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 02:57:05.835003 1441872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 02:57:05.843361 1441872 system_pods.go:59] 7 kube-system pods found
	I0131 02:57:05.843392 1441872 system_pods.go:61] "coredns-6d4b75cb6d-8kmws" [97688208-d9f8-418c-9a1d-43b2b84ea258] Running
	I0131 02:57:05.843400 1441872 system_pods.go:61] "etcd-test-preload-723521" [49fc4a5c-a2ab-4788-9a6c-d31597afadc6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 02:57:05.843407 1441872 system_pods.go:61] "kube-apiserver-test-preload-723521" [be1ef3fe-4321-4009-b9ea-b7f0525a0bc3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 02:57:05.843416 1441872 system_pods.go:61] "kube-controller-manager-test-preload-723521" [32bcfb44-26c0-4e31-be8b-cdd16005749f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 02:57:05.843422 1441872 system_pods.go:61] "kube-proxy-gs6f8" [d6e540ed-42fd-43cb-8918-9f64a286c41e] Running
	I0131 02:57:05.843430 1441872 system_pods.go:61] "kube-scheduler-test-preload-723521" [7dd412c3-fdab-4528-b138-e5b7a13a60ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 02:57:05.843437 1441872 system_pods.go:61] "storage-provisioner" [027d4c7f-1fe0-4e5b-a575-2629ddd8a147] Running
	I0131 02:57:05.843444 1441872 system_pods.go:74] duration metric: took 8.412958ms to wait for pod list to return data ...
	I0131 02:57:05.843455 1441872 node_conditions.go:102] verifying NodePressure condition ...
	I0131 02:57:05.846844 1441872 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:57:05.846886 1441872 node_conditions.go:123] node cpu capacity is 2
	I0131 02:57:05.846903 1441872 node_conditions.go:105] duration metric: took 3.438896ms to run NodePressure ...
	I0131 02:57:05.846925 1441872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 02:57:06.053637 1441872 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 02:57:06.057881 1441872 kubeadm.go:787] kubelet initialised
	I0131 02:57:06.057902 1441872 kubeadm.go:788] duration metric: took 4.232525ms waiting for restarted kubelet to initialise ...
	I0131 02:57:06.057910 1441872 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:57:06.066441 1441872 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-8kmws" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:06.072120 1441872 pod_ready.go:97] node "test-preload-723521" hosting pod "coredns-6d4b75cb6d-8kmws" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.072145 1441872 pod_ready.go:81] duration metric: took 5.672579ms waiting for pod "coredns-6d4b75cb6d-8kmws" in "kube-system" namespace to be "Ready" ...
	E0131 02:57:06.072153 1441872 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-723521" hosting pod "coredns-6d4b75cb6d-8kmws" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.072159 1441872 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:06.077552 1441872 pod_ready.go:97] node "test-preload-723521" hosting pod "etcd-test-preload-723521" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.077583 1441872 pod_ready.go:81] duration metric: took 5.414958ms waiting for pod "etcd-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	E0131 02:57:06.077594 1441872 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-723521" hosting pod "etcd-test-preload-723521" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.077602 1441872 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:06.081476 1441872 pod_ready.go:97] node "test-preload-723521" hosting pod "kube-apiserver-test-preload-723521" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.081500 1441872 pod_ready.go:81] duration metric: took 3.889939ms waiting for pod "kube-apiserver-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	E0131 02:57:06.081510 1441872 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-723521" hosting pod "kube-apiserver-test-preload-723521" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.081519 1441872 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:06.239299 1441872 pod_ready.go:97] node "test-preload-723521" hosting pod "kube-controller-manager-test-preload-723521" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.239337 1441872 pod_ready.go:81] duration metric: took 157.806297ms waiting for pod "kube-controller-manager-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	E0131 02:57:06.239350 1441872 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-723521" hosting pod "kube-controller-manager-test-preload-723521" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.239359 1441872 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gs6f8" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:06.639652 1441872 pod_ready.go:97] node "test-preload-723521" hosting pod "kube-proxy-gs6f8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.639691 1441872 pod_ready.go:81] duration metric: took 400.321229ms waiting for pod "kube-proxy-gs6f8" in "kube-system" namespace to be "Ready" ...
	E0131 02:57:06.639718 1441872 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-723521" hosting pod "kube-proxy-gs6f8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:06.639728 1441872 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:07.038807 1441872 pod_ready.go:97] node "test-preload-723521" hosting pod "kube-scheduler-test-preload-723521" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:07.038844 1441872 pod_ready.go:81] duration metric: took 399.103405ms waiting for pod "kube-scheduler-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	E0131 02:57:07.038857 1441872 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-723521" hosting pod "kube-scheduler-test-preload-723521" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:07.038867 1441872 pod_ready.go:38] duration metric: took 980.948142ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:57:07.038891 1441872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 02:57:07.055544 1441872 ops.go:34] apiserver oom_adj: -16
	I0131 02:57:07.055571 1441872 kubeadm.go:640] restartCluster took 21.289784314s
	I0131 02:57:07.055581 1441872 kubeadm.go:406] StartCluster complete in 21.33548216s
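The oom_adj probe a few lines up confirms the freshly started apiserver carries the expected -16 score. A Go sketch of the same lookup that scans /proc instead of calling pgrep (illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // apiserverOOMAdj finds the kube-apiserver process by its comm name and reads
    // its oom_adj, the check the log does with `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
    func apiserverOOMAdj() (string, error) {
    	comms, err := filepath.Glob("/proc/[0-9]*/comm")
    	if err != nil {
    		return "", err
    	}
    	for _, comm := range comms {
    		name, err := os.ReadFile(comm)
    		if err != nil {
    			continue
    		}
    		if strings.TrimSpace(string(name)) == "kube-apiserver" {
    			adj, err := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
    			if err != nil {
    				return "", err
    			}
    			return strings.TrimSpace(string(adj)), nil
    		}
    	}
    	return "", fmt.Errorf("kube-apiserver process not found")
    }

    func main() {
    	adj, err := apiserverOOMAdj()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("apiserver oom_adj:", adj) // the log shows -16
    }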
	I0131 02:57:07.055599 1441872 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:57:07.055687 1441872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:57:07.056248 1441872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:57:07.056525 1441872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 02:57:07.056607 1441872 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 02:57:07.056699 1441872 addons.go:69] Setting storage-provisioner=true in profile "test-preload-723521"
	I0131 02:57:07.056720 1441872 addons.go:69] Setting default-storageclass=true in profile "test-preload-723521"
	I0131 02:57:07.056750 1441872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-723521"
	I0131 02:57:07.056779 1441872 config.go:182] Loaded profile config "test-preload-723521": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0131 02:57:07.056725 1441872 addons.go:234] Setting addon storage-provisioner=true in "test-preload-723521"
	W0131 02:57:07.056839 1441872 addons.go:243] addon storage-provisioner should already be in state true
	I0131 02:57:07.056888 1441872 host.go:66] Checking if "test-preload-723521" exists ...
	I0131 02:57:07.057076 1441872 kapi.go:59] client config for test-preload-723521: &rest.Config{Host:"https://192.168.39.101:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:57:07.057162 1441872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:57:07.057232 1441872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:57:07.057252 1441872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:57:07.057300 1441872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:57:07.060954 1441872 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-723521" context rescaled to 1 replicas
	I0131 02:57:07.060996 1441872 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 02:57:07.063472 1441872 out.go:177] * Verifying Kubernetes components...
	I0131 02:57:07.065300 1441872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:57:07.072632 1441872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
	I0131 02:57:07.073152 1441872 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:57:07.073734 1441872 main.go:141] libmachine: Using API Version  1
	I0131 02:57:07.073763 1441872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:57:07.074123 1441872 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:57:07.074635 1441872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:57:07.074697 1441872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:57:07.077801 1441872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0131 02:57:07.078352 1441872 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:57:07.078932 1441872 main.go:141] libmachine: Using API Version  1
	I0131 02:57:07.078966 1441872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:57:07.079383 1441872 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:57:07.079578 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetState
	I0131 02:57:07.082454 1441872 kapi.go:59] client config for test-preload-723521: &rest.Config{Host:"https://192.168.39.101:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/client.crt", KeyFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/test-preload-723521/client.key", CAFile:"/home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0131 02:57:07.082811 1441872 addons.go:234] Setting addon default-storageclass=true in "test-preload-723521"
	W0131 02:57:07.082834 1441872 addons.go:243] addon default-storageclass should already be in state true
	I0131 02:57:07.082864 1441872 host.go:66] Checking if "test-preload-723521" exists ...
	I0131 02:57:07.083276 1441872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:57:07.083326 1441872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:57:07.091708 1441872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44789
	I0131 02:57:07.092149 1441872 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:57:07.092746 1441872 main.go:141] libmachine: Using API Version  1
	I0131 02:57:07.092781 1441872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:57:07.093131 1441872 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:57:07.093381 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetState
	I0131 02:57:07.095340 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:57:07.097756 1441872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 02:57:07.099265 1441872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37985
	I0131 02:57:07.099333 1441872 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 02:57:07.099354 1441872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 02:57:07.099379 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:57:07.099674 1441872 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:57:07.100450 1441872 main.go:141] libmachine: Using API Version  1
	I0131 02:57:07.100472 1441872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:57:07.100892 1441872 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:57:07.101642 1441872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:57:07.101701 1441872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:57:07.103310 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:57:07.103915 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:57:07.103946 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:57:07.104272 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:57:07.104504 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:57:07.104727 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:57:07.104909 1441872 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/test-preload-723521/id_rsa Username:docker}
	I0131 02:57:07.117700 1441872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37929
	I0131 02:57:07.118116 1441872 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:57:07.118623 1441872 main.go:141] libmachine: Using API Version  1
	I0131 02:57:07.118652 1441872 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:57:07.118977 1441872 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:57:07.119208 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetState
	I0131 02:57:07.121056 1441872 main.go:141] libmachine: (test-preload-723521) Calling .DriverName
	I0131 02:57:07.121326 1441872 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 02:57:07.121342 1441872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 02:57:07.121358 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHHostname
	I0131 02:57:07.124332 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:57:07.124779 1441872 main.go:141] libmachine: (test-preload-723521) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:7a:51", ip: ""} in network mk-test-preload-723521: {Iface:virbr1 ExpiryTime:2024-01-31 03:56:22 +0000 UTC Type:0 Mac:52:54:00:4e:7a:51 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-723521 Clientid:01:52:54:00:4e:7a:51}
	I0131 02:57:07.124815 1441872 main.go:141] libmachine: (test-preload-723521) DBG | domain test-preload-723521 has defined IP address 192.168.39.101 and MAC address 52:54:00:4e:7a:51 in network mk-test-preload-723521
	I0131 02:57:07.125015 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHPort
	I0131 02:57:07.125234 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHKeyPath
	I0131 02:57:07.125415 1441872 main.go:141] libmachine: (test-preload-723521) Calling .GetSSHUsername
	I0131 02:57:07.125581 1441872 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/test-preload-723521/id_rsa Username:docker}
	I0131 02:57:07.225117 1441872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 02:57:07.297189 1441872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 02:57:07.326467 1441872 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0131 02:57:07.326513 1441872 node_ready.go:35] waiting up to 6m0s for node "test-preload-723521" to be "Ready" ...
	I0131 02:57:08.307082 1441872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.081921087s)
	I0131 02:57:08.307147 1441872 main.go:141] libmachine: Making call to close driver server
	I0131 02:57:08.307145 1441872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.009916118s)
	I0131 02:57:08.307160 1441872 main.go:141] libmachine: (test-preload-723521) Calling .Close
	I0131 02:57:08.307189 1441872 main.go:141] libmachine: Making call to close driver server
	I0131 02:57:08.307205 1441872 main.go:141] libmachine: (test-preload-723521) Calling .Close
	I0131 02:57:08.307478 1441872 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:57:08.307495 1441872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:57:08.307513 1441872 main.go:141] libmachine: Making call to close driver server
	I0131 02:57:08.307521 1441872 main.go:141] libmachine: (test-preload-723521) Calling .Close
	I0131 02:57:08.307587 1441872 main.go:141] libmachine: (test-preload-723521) DBG | Closing plugin on server side
	I0131 02:57:08.307610 1441872 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:57:08.307626 1441872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:57:08.307638 1441872 main.go:141] libmachine: Making call to close driver server
	I0131 02:57:08.307645 1441872 main.go:141] libmachine: (test-preload-723521) Calling .Close
	I0131 02:57:08.307768 1441872 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:57:08.307782 1441872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:57:08.307841 1441872 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:57:08.307855 1441872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:57:08.307871 1441872 main.go:141] libmachine: (test-preload-723521) DBG | Closing plugin on server side
	I0131 02:57:08.314010 1441872 main.go:141] libmachine: Making call to close driver server
	I0131 02:57:08.314030 1441872 main.go:141] libmachine: (test-preload-723521) Calling .Close
	I0131 02:57:08.314294 1441872 main.go:141] libmachine: Successfully made call to close driver server
	I0131 02:57:08.314316 1441872 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 02:57:08.314316 1441872 main.go:141] libmachine: (test-preload-723521) DBG | Closing plugin on server side
	I0131 02:57:08.316917 1441872 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0131 02:57:08.318142 1441872 addons.go:505] enable addons completed in 1.261554485s: enabled=[storage-provisioner default-storageclass]
	I0131 02:57:09.344271 1441872 node_ready.go:58] node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:11.831122 1441872 node_ready.go:58] node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:14.331183 1441872 node_ready.go:58] node "test-preload-723521" has status "Ready":"False"
	I0131 02:57:14.831471 1441872 node_ready.go:49] node "test-preload-723521" has status "Ready":"True"
	I0131 02:57:14.831497 1441872 node_ready.go:38] duration metric: took 7.504964403s waiting for node "test-preload-723521" to be "Ready" ...
	I0131 02:57:14.831508 1441872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:57:14.837055 1441872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-8kmws" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:14.842085 1441872 pod_ready.go:92] pod "coredns-6d4b75cb6d-8kmws" in "kube-system" namespace has status "Ready":"True"
	I0131 02:57:14.842114 1441872 pod_ready.go:81] duration metric: took 5.027276ms waiting for pod "coredns-6d4b75cb6d-8kmws" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:14.842127 1441872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:15.349184 1441872 pod_ready.go:92] pod "etcd-test-preload-723521" in "kube-system" namespace has status "Ready":"True"
	I0131 02:57:15.349213 1441872 pod_ready.go:81] duration metric: took 507.077082ms waiting for pod "etcd-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:15.349227 1441872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:15.354257 1441872 pod_ready.go:92] pod "kube-apiserver-test-preload-723521" in "kube-system" namespace has status "Ready":"True"
	I0131 02:57:15.354285 1441872 pod_ready.go:81] duration metric: took 5.050609ms waiting for pod "kube-apiserver-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:15.354297 1441872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:17.362134 1441872 pod_ready.go:102] pod "kube-controller-manager-test-preload-723521" in "kube-system" namespace has status "Ready":"False"
	I0131 02:57:17.860177 1441872 pod_ready.go:92] pod "kube-controller-manager-test-preload-723521" in "kube-system" namespace has status "Ready":"True"
	I0131 02:57:17.860209 1441872 pod_ready.go:81] duration metric: took 2.505902203s waiting for pod "kube-controller-manager-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:17.860223 1441872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gs6f8" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:17.864844 1441872 pod_ready.go:92] pod "kube-proxy-gs6f8" in "kube-system" namespace has status "Ready":"True"
	I0131 02:57:17.864871 1441872 pod_ready.go:81] duration metric: took 4.64093ms waiting for pod "kube-proxy-gs6f8" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:17.864883 1441872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:18.030506 1441872 pod_ready.go:92] pod "kube-scheduler-test-preload-723521" in "kube-system" namespace has status "Ready":"True"
	I0131 02:57:18.030539 1441872 pod_ready.go:81] duration metric: took 165.646521ms waiting for pod "kube-scheduler-test-preload-723521" in "kube-system" namespace to be "Ready" ...
	I0131 02:57:18.030552 1441872 pod_ready.go:38] duration metric: took 3.199034536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 02:57:18.030570 1441872 api_server.go:52] waiting for apiserver process to appear ...
	I0131 02:57:18.030635 1441872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:57:18.043461 1441872 api_server.go:72] duration metric: took 10.982432818s to wait for apiserver process to appear ...
	I0131 02:57:18.043492 1441872 api_server.go:88] waiting for apiserver healthz status ...
	I0131 02:57:18.043519 1441872 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I0131 02:57:18.049040 1441872 api_server.go:279] https://192.168.39.101:8443/healthz returned 200:
	ok
	I0131 02:57:18.050121 1441872 api_server.go:141] control plane version: v1.24.4
	I0131 02:57:18.050150 1441872 api_server.go:131] duration metric: took 6.648536ms to wait for apiserver health ...
	I0131 02:57:18.050159 1441872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 02:57:18.233197 1441872 system_pods.go:59] 7 kube-system pods found
	I0131 02:57:18.233229 1441872 system_pods.go:61] "coredns-6d4b75cb6d-8kmws" [97688208-d9f8-418c-9a1d-43b2b84ea258] Running
	I0131 02:57:18.233234 1441872 system_pods.go:61] "etcd-test-preload-723521" [49fc4a5c-a2ab-4788-9a6c-d31597afadc6] Running
	I0131 02:57:18.233239 1441872 system_pods.go:61] "kube-apiserver-test-preload-723521" [be1ef3fe-4321-4009-b9ea-b7f0525a0bc3] Running
	I0131 02:57:18.233243 1441872 system_pods.go:61] "kube-controller-manager-test-preload-723521" [32bcfb44-26c0-4e31-be8b-cdd16005749f] Running
	I0131 02:57:18.233247 1441872 system_pods.go:61] "kube-proxy-gs6f8" [d6e540ed-42fd-43cb-8918-9f64a286c41e] Running
	I0131 02:57:18.233251 1441872 system_pods.go:61] "kube-scheduler-test-preload-723521" [7dd412c3-fdab-4528-b138-e5b7a13a60ba] Running
	I0131 02:57:18.233254 1441872 system_pods.go:61] "storage-provisioner" [027d4c7f-1fe0-4e5b-a575-2629ddd8a147] Running
	I0131 02:57:18.233266 1441872 system_pods.go:74] duration metric: took 183.100666ms to wait for pod list to return data ...
	I0131 02:57:18.233274 1441872 default_sa.go:34] waiting for default service account to be created ...
	I0131 02:57:18.431210 1441872 default_sa.go:45] found service account: "default"
	I0131 02:57:18.431245 1441872 default_sa.go:55] duration metric: took 197.964614ms for default service account to be created ...
	I0131 02:57:18.431255 1441872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 02:57:18.633925 1441872 system_pods.go:86] 7 kube-system pods found
	I0131 02:57:18.633956 1441872 system_pods.go:89] "coredns-6d4b75cb6d-8kmws" [97688208-d9f8-418c-9a1d-43b2b84ea258] Running
	I0131 02:57:18.633961 1441872 system_pods.go:89] "etcd-test-preload-723521" [49fc4a5c-a2ab-4788-9a6c-d31597afadc6] Running
	I0131 02:57:18.633966 1441872 system_pods.go:89] "kube-apiserver-test-preload-723521" [be1ef3fe-4321-4009-b9ea-b7f0525a0bc3] Running
	I0131 02:57:18.633970 1441872 system_pods.go:89] "kube-controller-manager-test-preload-723521" [32bcfb44-26c0-4e31-be8b-cdd16005749f] Running
	I0131 02:57:18.633974 1441872 system_pods.go:89] "kube-proxy-gs6f8" [d6e540ed-42fd-43cb-8918-9f64a286c41e] Running
	I0131 02:57:18.633978 1441872 system_pods.go:89] "kube-scheduler-test-preload-723521" [7dd412c3-fdab-4528-b138-e5b7a13a60ba] Running
	I0131 02:57:18.633981 1441872 system_pods.go:89] "storage-provisioner" [027d4c7f-1fe0-4e5b-a575-2629ddd8a147] Running
	I0131 02:57:18.633987 1441872 system_pods.go:126] duration metric: took 202.727608ms to wait for k8s-apps to be running ...
	I0131 02:57:18.633994 1441872 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 02:57:18.634038 1441872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:57:18.647416 1441872 system_svc.go:56] duration metric: took 13.411939ms WaitForService to wait for kubelet.
	I0131 02:57:18.647444 1441872 kubeadm.go:581] duration metric: took 11.586426646s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 02:57:18.647463 1441872 node_conditions.go:102] verifying NodePressure condition ...
	I0131 02:57:18.831236 1441872 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 02:57:18.831272 1441872 node_conditions.go:123] node cpu capacity is 2
	I0131 02:57:18.831283 1441872 node_conditions.go:105] duration metric: took 183.815382ms to run NodePressure ...
	I0131 02:57:18.831295 1441872 start.go:228] waiting for startup goroutines ...
	I0131 02:57:18.831301 1441872 start.go:233] waiting for cluster config update ...
	I0131 02:57:18.831310 1441872 start.go:242] writing updated cluster config ...
	I0131 02:57:18.831575 1441872 ssh_runner.go:195] Run: rm -f paused
	I0131 02:57:18.881004 1441872 start.go:600] kubectl: 1.29.1, cluster: 1.24.4 (minor skew: 5)
	I0131 02:57:18.883113 1441872 out.go:177] 
	W0131 02:57:18.884734 1441872 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0131 02:57:18.886272 1441872 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0131 02:57:18.887774 1441872 out.go:177] * Done! kubectl is now configured to use "test-preload-723521" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 02:56:21 UTC, ends at Wed 2024-01-31 02:57:19 UTC. --
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.833471222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706669839833456101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=1df2b638-6a6a-4f52-b625-18930f0a4f85 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.834310022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0d7ba9b5-52e6-4b37-9d73-37122b34cc85 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.834385012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0d7ba9b5-52e6-4b37-9d73-37122b34cc85 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.834648748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6b74c5146f4d8e6632997a2400bad714c2d8ab0fa8e35d1ee3adac7b38d467d,PodSandboxId:819b27d03363ee06b852c8dd124717c6cc83e7d0a4d1415fc5f1a9c969d3c7be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1706669829711651547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8kmws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97688208-d9f8-418c-9a1d-43b2b84ea258,},Annotations:map[string]string{io.kubernetes.container.hash: 9744e065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c659c4862f6511c03bbfe46869a157e3d2084620bc44e82b0cebd431f49f2df5,PodSandboxId:5e089cdfb523897e3504c728bd2e931f7e2000dd3d13e20133a5a0e58a3b5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706669827383309983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 027d4c7f-1fe0-4e5b-a575-2629ddd8a147,},Annotations:map[string]string{io.kubernetes.container.hash: e0b82bff,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a3e5b56c653ce8af22d0cecfe5246de0aa159e673e841533b81c838f4324d5,PodSandboxId:e406227e7388f589e59eee6ce2d5a80076a39015e35d9d54922d5a6742bb0012,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1706669826843013357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gs6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e540ed-42fd-43cb-8918-9f64a286c41e,},Annotations:map[string]string{io.kubernetes.container.hash: 161eb9b8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29305a61526b91f2025bc59e5ec759fe56ad1d06771ae4a605cf6854dd0fa8da,PodSandboxId:5e089cdfb523897e3504c728bd2e931f7e2000dd3d13e20133a5a0e58a3b5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706669826540670001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02
7d4c7f-1fe0-4e5b-a575-2629ddd8a147,},Annotations:map[string]string{io.kubernetes.container.hash: e0b82bff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943e6adaa10ad8708361ebe65f560d043f108f137b16a925f78654017ab81ed7,PodSandboxId:a9c203600246d590a0ddffd50ffe4abc9339af0528364b4abe4cdcef39872ba1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1706669818902508087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bdab4097702d8da4a3fadd831722324,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 2a936566,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae80a59e1f864df6fff084e733b39be160d8a301619a720d91b8a28e937b50fb,PodSandboxId:062be28848706c928d6dd4215ad6731d9e96cd0e7c1fc4a5ca6ce65b5dac5229,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1706669818589299047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c33490b925121321f1abe27417123861,},Annotations:map[string]string{i
o.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16ae968eec9586e044fd292b2ca5156e98f69632907afcb1442db0bf1fe48a8,PodSandboxId:cbb8185804ef7ec2391d20f520adf7b417e68e68d57255f54c1fcad45fe8041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1706669818401890263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652401d2974c3d2f12b285e64620264b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31118c7844049d3e57a732f4cc2139bf9e1cbb003d94702e8a611e86d3353cd4,PodSandboxId:303bc1ff3085ae0f607ea9edb23405ac585d8bcd51165fd2601562633e6dc831,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1706669818156991663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e038cb4dc13132c3d39307b3c319016f,},Annotations:map[string]
string{io.kubernetes.container.hash: c1a9d067,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0d7ba9b5-52e6-4b37-9d73-37122b34cc85 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.872168890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3bfd5c9a-9a64-4cbc-9ce7-dd66cc236084 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.872229123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3bfd5c9a-9a64-4cbc-9ce7-dd66cc236084 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.873378931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d97b1af7-631c-4080-b1f5-abf110c96472 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.873898836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706669839873883777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=d97b1af7-631c-4080-b1f5-abf110c96472 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.874335676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8e3c3a14-2309-4d63-a75a-966864347651 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.874389426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8e3c3a14-2309-4d63-a75a-966864347651 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.874552823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6b74c5146f4d8e6632997a2400bad714c2d8ab0fa8e35d1ee3adac7b38d467d,PodSandboxId:819b27d03363ee06b852c8dd124717c6cc83e7d0a4d1415fc5f1a9c969d3c7be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1706669829711651547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8kmws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97688208-d9f8-418c-9a1d-43b2b84ea258,},Annotations:map[string]string{io.kubernetes.container.hash: 9744e065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c659c4862f6511c03bbfe46869a157e3d2084620bc44e82b0cebd431f49f2df5,PodSandboxId:5e089cdfb523897e3504c728bd2e931f7e2000dd3d13e20133a5a0e58a3b5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706669827383309983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 027d4c7f-1fe0-4e5b-a575-2629ddd8a147,},Annotations:map[string]string{io.kubernetes.container.hash: e0b82bff,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a3e5b56c653ce8af22d0cecfe5246de0aa159e673e841533b81c838f4324d5,PodSandboxId:e406227e7388f589e59eee6ce2d5a80076a39015e35d9d54922d5a6742bb0012,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1706669826843013357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gs6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e540ed-42fd-43cb-8918-9f64a286c41e,},Annotations:map[string]string{io.kubernetes.container.hash: 161eb9b8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29305a61526b91f2025bc59e5ec759fe56ad1d06771ae4a605cf6854dd0fa8da,PodSandboxId:5e089cdfb523897e3504c728bd2e931f7e2000dd3d13e20133a5a0e58a3b5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706669826540670001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02
7d4c7f-1fe0-4e5b-a575-2629ddd8a147,},Annotations:map[string]string{io.kubernetes.container.hash: e0b82bff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943e6adaa10ad8708361ebe65f560d043f108f137b16a925f78654017ab81ed7,PodSandboxId:a9c203600246d590a0ddffd50ffe4abc9339af0528364b4abe4cdcef39872ba1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1706669818902508087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bdab4097702d8da4a3fadd831722324,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 2a936566,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae80a59e1f864df6fff084e733b39be160d8a301619a720d91b8a28e937b50fb,PodSandboxId:062be28848706c928d6dd4215ad6731d9e96cd0e7c1fc4a5ca6ce65b5dac5229,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1706669818589299047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c33490b925121321f1abe27417123861,},Annotations:map[string]string{i
o.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16ae968eec9586e044fd292b2ca5156e98f69632907afcb1442db0bf1fe48a8,PodSandboxId:cbb8185804ef7ec2391d20f520adf7b417e68e68d57255f54c1fcad45fe8041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1706669818401890263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652401d2974c3d2f12b285e64620264b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31118c7844049d3e57a732f4cc2139bf9e1cbb003d94702e8a611e86d3353cd4,PodSandboxId:303bc1ff3085ae0f607ea9edb23405ac585d8bcd51165fd2601562633e6dc831,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1706669818156991663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e038cb4dc13132c3d39307b3c319016f,},Annotations:map[string]
string{io.kubernetes.container.hash: c1a9d067,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8e3c3a14-2309-4d63-a75a-966864347651 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.915977182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1f60e3ce-dc0b-42ee-ae81-f397bd2df380 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.916058773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1f60e3ce-dc0b-42ee-ae81-f397bd2df380 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.917862786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2474366c-5707-4338-ad5e-cee18983dffd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.918407265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706669839918391037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=2474366c-5707-4338-ad5e-cee18983dffd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.919361082Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3018e7a5-9e2d-4cea-bb76-79be6c42ee4e name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.919438033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3018e7a5-9e2d-4cea-bb76-79be6c42ee4e name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.919781160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6b74c5146f4d8e6632997a2400bad714c2d8ab0fa8e35d1ee3adac7b38d467d,PodSandboxId:819b27d03363ee06b852c8dd124717c6cc83e7d0a4d1415fc5f1a9c969d3c7be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1706669829711651547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8kmws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97688208-d9f8-418c-9a1d-43b2b84ea258,},Annotations:map[string]string{io.kubernetes.container.hash: 9744e065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c659c4862f6511c03bbfe46869a157e3d2084620bc44e82b0cebd431f49f2df5,PodSandboxId:5e089cdfb523897e3504c728bd2e931f7e2000dd3d13e20133a5a0e58a3b5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706669827383309983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 027d4c7f-1fe0-4e5b-a575-2629ddd8a147,},Annotations:map[string]string{io.kubernetes.container.hash: e0b82bff,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a3e5b56c653ce8af22d0cecfe5246de0aa159e673e841533b81c838f4324d5,PodSandboxId:e406227e7388f589e59eee6ce2d5a80076a39015e35d9d54922d5a6742bb0012,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1706669826843013357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gs6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e540ed-42fd-43cb-8918-9f64a286c41e,},Annotations:map[string]string{io.kubernetes.container.hash: 161eb9b8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29305a61526b91f2025bc59e5ec759fe56ad1d06771ae4a605cf6854dd0fa8da,PodSandboxId:5e089cdfb523897e3504c728bd2e931f7e2000dd3d13e20133a5a0e58a3b5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706669826540670001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02
7d4c7f-1fe0-4e5b-a575-2629ddd8a147,},Annotations:map[string]string{io.kubernetes.container.hash: e0b82bff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943e6adaa10ad8708361ebe65f560d043f108f137b16a925f78654017ab81ed7,PodSandboxId:a9c203600246d590a0ddffd50ffe4abc9339af0528364b4abe4cdcef39872ba1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1706669818902508087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bdab4097702d8da4a3fadd831722324,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 2a936566,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae80a59e1f864df6fff084e733b39be160d8a301619a720d91b8a28e937b50fb,PodSandboxId:062be28848706c928d6dd4215ad6731d9e96cd0e7c1fc4a5ca6ce65b5dac5229,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1706669818589299047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c33490b925121321f1abe27417123861,},Annotations:map[string]string{i
o.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16ae968eec9586e044fd292b2ca5156e98f69632907afcb1442db0bf1fe48a8,PodSandboxId:cbb8185804ef7ec2391d20f520adf7b417e68e68d57255f54c1fcad45fe8041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1706669818401890263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652401d2974c3d2f12b285e64620264b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31118c7844049d3e57a732f4cc2139bf9e1cbb003d94702e8a611e86d3353cd4,PodSandboxId:303bc1ff3085ae0f607ea9edb23405ac585d8bcd51165fd2601562633e6dc831,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1706669818156991663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e038cb4dc13132c3d39307b3c319016f,},Annotations:map[string]
string{io.kubernetes.container.hash: c1a9d067,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3018e7a5-9e2d-4cea-bb76-79be6c42ee4e name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.964919881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5e623eca-5f95-4b73-8aab-c9665da3fa28 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.965000658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5e623eca-5f95-4b73-8aab-c9665da3fa28 name=/runtime.v1.RuntimeService/Version
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.967100396Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f83caac9-e57b-4fdf-856e-aae57b51ce48 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.967646055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706669839967621011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=f83caac9-e57b-4fdf-856e-aae57b51ce48 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.968465283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3ffbddb3-a56c-4ead-9270-5098732f849c name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.968532239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3ffbddb3-a56c-4ead-9270-5098732f849c name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 02:57:19 test-preload-723521 crio[709]: time="2024-01-31 02:57:19.968817897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6b74c5146f4d8e6632997a2400bad714c2d8ab0fa8e35d1ee3adac7b38d467d,PodSandboxId:819b27d03363ee06b852c8dd124717c6cc83e7d0a4d1415fc5f1a9c969d3c7be,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1706669829711651547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8kmws,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97688208-d9f8-418c-9a1d-43b2b84ea258,},Annotations:map[string]string{io.kubernetes.container.hash: 9744e065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c659c4862f6511c03bbfe46869a157e3d2084620bc44e82b0cebd431f49f2df5,PodSandboxId:5e089cdfb523897e3504c728bd2e931f7e2000dd3d13e20133a5a0e58a3b5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706669827383309983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 027d4c7f-1fe0-4e5b-a575-2629ddd8a147,},Annotations:map[string]string{io.kubernetes.container.hash: e0b82bff,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a3e5b56c653ce8af22d0cecfe5246de0aa159e673e841533b81c838f4324d5,PodSandboxId:e406227e7388f589e59eee6ce2d5a80076a39015e35d9d54922d5a6742bb0012,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1706669826843013357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gs6f8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d6e540ed-42fd-43cb-8918-9f64a286c41e,},Annotations:map[string]string{io.kubernetes.container.hash: 161eb9b8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29305a61526b91f2025bc59e5ec759fe56ad1d06771ae4a605cf6854dd0fa8da,PodSandboxId:5e089cdfb523897e3504c728bd2e931f7e2000dd3d13e20133a5a0e58a3b5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706669826540670001,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02
7d4c7f-1fe0-4e5b-a575-2629ddd8a147,},Annotations:map[string]string{io.kubernetes.container.hash: e0b82bff,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943e6adaa10ad8708361ebe65f560d043f108f137b16a925f78654017ab81ed7,PodSandboxId:a9c203600246d590a0ddffd50ffe4abc9339af0528364b4abe4cdcef39872ba1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1706669818902508087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bdab4097702d8da4a3fadd831722324,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 2a936566,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae80a59e1f864df6fff084e733b39be160d8a301619a720d91b8a28e937b50fb,PodSandboxId:062be28848706c928d6dd4215ad6731d9e96cd0e7c1fc4a5ca6ce65b5dac5229,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1706669818589299047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c33490b925121321f1abe27417123861,},Annotations:map[string]string{i
o.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f16ae968eec9586e044fd292b2ca5156e98f69632907afcb1442db0bf1fe48a8,PodSandboxId:cbb8185804ef7ec2391d20f520adf7b417e68e68d57255f54c1fcad45fe8041b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1706669818401890263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652401d2974c3d2f12b285e64620264b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31118c7844049d3e57a732f4cc2139bf9e1cbb003d94702e8a611e86d3353cd4,PodSandboxId:303bc1ff3085ae0f607ea9edb23405ac585d8bcd51165fd2601562633e6dc831,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1706669818156991663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-723521,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e038cb4dc13132c3d39307b3c319016f,},Annotations:map[string]
string{io.kubernetes.container.hash: c1a9d067,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3ffbddb3-a56c-4ead-9270-5098732f849c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a6b74c5146f4d       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   10 seconds ago      Running             coredns                   1                   819b27d03363e       coredns-6d4b75cb6d-8kmws
	c659c4862f651       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       3                   5e089cdfb5238       storage-provisioner
	81a3e5b56c653       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   e406227e7388f       kube-proxy-gs6f8
	29305a61526b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Exited              storage-provisioner       2                   5e089cdfb5238       storage-provisioner
	943e6adaa10ad       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   a9c203600246d       etcd-test-preload-723521
	ae80a59e1f864       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   062be28848706       kube-scheduler-test-preload-723521
	f16ae968eec95       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   cbb8185804ef7       kube-controller-manager-test-preload-723521
	31118c7844049       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   303bc1ff3085a       kube-apiserver-test-preload-723521
	
	
	==> coredns [a6b74c5146f4d8e6632997a2400bad714c2d8ab0fa8e35d1ee3adac7b38d467d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:58748 - 7132 "HINFO IN 5097174819982139981.7033077406249204319. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010526124s
	
	
	==> describe nodes <==
	Name:               test-preload-723521
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-723521
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=test-preload-723521
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T02_55_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 02:55:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-723521
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 02:57:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 02:57:14 +0000   Wed, 31 Jan 2024 02:55:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 02:57:14 +0000   Wed, 31 Jan 2024 02:55:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 02:57:14 +0000   Wed, 31 Jan 2024 02:55:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 02:57:14 +0000   Wed, 31 Jan 2024 02:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    test-preload-723521
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e80587205dd44b1db5902283989162e5
	  System UUID:                e8058720-5dd4-4b1d-b590-2283989162e5
	  Boot ID:                    ab5c0f05-6434-47fa-a17a-0377fb23998e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8kmws                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     109s
	  kube-system                 etcd-test-preload-723521                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m2s
	  kube-system                 kube-apiserver-test-preload-723521             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-test-preload-723521    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-gs6f8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-test-preload-723521             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s               kubelet          Node test-preload-723521 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s               kubelet          Node test-preload-723521 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s               kubelet          Node test-preload-723521 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                112s               kubelet          Node test-preload-723521 status is now: NodeReady
	  Normal  RegisteredNode           110s               node-controller  Node test-preload-723521 event: Registered Node test-preload-723521 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-723521 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-723521 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-723521 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-723521 event: Registered Node test-preload-723521 in Controller
	
	
	==> dmesg <==
	[Jan31 02:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064814] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.339555] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.621016] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.128203] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.391140] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.053457] systemd-fstab-generator[632]: Ignoring "noauto" for root device
	[  +0.106780] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.141005] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.096266] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.205262] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[ +24.637485] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
	[Jan31 02:57] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.228699] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [943e6adaa10ad8708361ebe65f560d043f108f137b16a925f78654017ab81ed7] <==
	{"level":"info","ts":"2024-01-31T02:57:00.535Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"65e271b8f7cb8d0f","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-01-31T02:57:00.537Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-01-31T02:57:00.541Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-31T02:57:00.542Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"65e271b8f7cb8d0f","initial-advertise-peer-urls":["https://192.168.39.101:2380"],"listen-peer-urls":["https://192.168.39.101:2380"],"advertise-client-urls":["https://192.168.39.101:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.101:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-31T02:57:00.542Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-31T02:57:00.541Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.101:2380"}
	{"level":"info","ts":"2024-01-31T02:57:00.542Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.101:2380"}
	{"level":"info","ts":"2024-01-31T02:57:00.542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f switched to configuration voters=(7341555381812563215)"}
	{"level":"info","ts":"2024-01-31T02:57:00.542Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"24cb6133d13a326a","local-member-id":"65e271b8f7cb8d0f","added-peer-id":"65e271b8f7cb8d0f","added-peer-peer-urls":["https://192.168.39.101:2380"]}
	{"level":"info","ts":"2024-01-31T02:57:00.542Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"24cb6133d13a326a","local-member-id":"65e271b8f7cb8d0f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T02:57:00.542Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T02:57:01.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-31T02:57:01.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-31T02:57:01.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f received MsgPreVoteResp from 65e271b8f7cb8d0f at term 2"}
	{"level":"info","ts":"2024-01-31T02:57:01.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f became candidate at term 3"}
	{"level":"info","ts":"2024-01-31T02:57:01.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f received MsgVoteResp from 65e271b8f7cb8d0f at term 3"}
	{"level":"info","ts":"2024-01-31T02:57:01.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f became leader at term 3"}
	{"level":"info","ts":"2024-01-31T02:57:01.724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 65e271b8f7cb8d0f elected leader 65e271b8f7cb8d0f at term 3"}
	{"level":"info","ts":"2024-01-31T02:57:01.726Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"65e271b8f7cb8d0f","local-member-attributes":"{Name:test-preload-723521 ClientURLs:[https://192.168.39.101:2379]}","request-path":"/0/members/65e271b8f7cb8d0f/attributes","cluster-id":"24cb6133d13a326a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T02:57:01.726Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T02:57:01.727Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T02:57:01.728Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.101:2379"}
	{"level":"info","ts":"2024-01-31T02:57:01.728Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T02:57:01.728Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T02:57:01.728Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:57:20 up 1 min,  0 users,  load average: 1.31, 0.36, 0.12
	Linux test-preload-723521 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [31118c7844049d3e57a732f4cc2139bf9e1cbb003d94702e8a611e86d3353cd4] <==
	I0131 02:57:04.109904       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0131 02:57:04.187957       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0131 02:57:04.109914       1 controller.go:83] Starting OpenAPI AggregationController
	I0131 02:57:04.112008       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0131 02:57:04.112019       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0131 02:57:04.114626       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0131 02:57:04.114947       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0131 02:57:04.271659       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0131 02:57:04.272897       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0131 02:57:04.288695       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0131 02:57:04.316438       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0131 02:57:04.319823       1 cache.go:39] Caches are synced for autoregister controller
	I0131 02:57:04.320138       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0131 02:57:04.326652       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0131 02:57:04.330154       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0131 02:57:04.798828       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0131 02:57:05.125552       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0131 02:57:05.932716       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0131 02:57:05.944249       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0131 02:57:05.987952       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0131 02:57:06.011843       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0131 02:57:06.018803       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0131 02:57:07.075247       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0131 02:57:17.056948       1 controller.go:611] quota admission added evaluator for: endpoints
	I0131 02:57:17.261212       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f16ae968eec9586e044fd292b2ca5156e98f69632907afcb1442db0bf1fe48a8] <==
	I0131 02:57:17.044164       1 shared_informer.go:262] Caches are synced for TTL
	I0131 02:57:17.052847       1 shared_informer.go:262] Caches are synced for HPA
	I0131 02:57:17.052956       1 shared_informer.go:262] Caches are synced for job
	I0131 02:57:17.053018       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0131 02:57:17.054661       1 shared_informer.go:262] Caches are synced for deployment
	I0131 02:57:17.062669       1 shared_informer.go:262] Caches are synced for namespace
	I0131 02:57:17.085071       1 shared_informer.go:262] Caches are synced for node
	I0131 02:57:17.085157       1 range_allocator.go:173] Starting range CIDR allocator
	I0131 02:57:17.085164       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0131 02:57:17.085174       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0131 02:57:17.102345       1 shared_informer.go:262] Caches are synced for taint
	I0131 02:57:17.102447       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0131 02:57:17.102515       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0131 02:57:17.102542       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-723521. Assuming now as a timestamp.
	I0131 02:57:17.102593       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0131 02:57:17.102684       1 event.go:294] "Event occurred" object="test-preload-723521" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-723521 event: Registered Node test-preload-723521 in Controller"
	I0131 02:57:17.130095       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0131 02:57:17.134392       1 shared_informer.go:262] Caches are synced for persistent volume
	I0131 02:57:17.138600       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0131 02:57:17.226566       1 shared_informer.go:262] Caches are synced for attach detach
	I0131 02:57:17.259219       1 shared_informer.go:262] Caches are synced for resource quota
	I0131 02:57:17.285650       1 shared_informer.go:262] Caches are synced for resource quota
	I0131 02:57:17.695419       1 shared_informer.go:262] Caches are synced for garbage collector
	I0131 02:57:17.748519       1 shared_informer.go:262] Caches are synced for garbage collector
	I0131 02:57:17.748599       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [81a3e5b56c653ce8af22d0cecfe5246de0aa159e673e841533b81c838f4324d5] <==
	I0131 02:57:07.002549       1 node.go:163] Successfully retrieved node IP: 192.168.39.101
	I0131 02:57:07.002708       1 server_others.go:138] "Detected node IP" address="192.168.39.101"
	I0131 02:57:07.002857       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0131 02:57:07.063600       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0131 02:57:07.063638       1 server_others.go:206] "Using iptables Proxier"
	I0131 02:57:07.063667       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0131 02:57:07.063951       1 server.go:661] "Version info" version="v1.24.4"
	I0131 02:57:07.063959       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 02:57:07.064832       1 config.go:317] "Starting service config controller"
	I0131 02:57:07.064866       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0131 02:57:07.065569       1 config.go:226] "Starting endpoint slice config controller"
	I0131 02:57:07.065607       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0131 02:57:07.071463       1 config.go:444] "Starting node config controller"
	I0131 02:57:07.071513       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0131 02:57:07.166642       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0131 02:57:07.170659       1 shared_informer.go:262] Caches are synced for service config
	I0131 02:57:07.183233       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [ae80a59e1f864df6fff084e733b39be160d8a301619a720d91b8a28e937b50fb] <==
	I0131 02:57:00.834541       1 serving.go:348] Generated self-signed cert in-memory
	W0131 02:57:04.212806       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0131 02:57:04.212956       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0131 02:57:04.212998       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0131 02:57:04.213028       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0131 02:57:04.253290       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0131 02:57:04.253403       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 02:57:04.259655       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0131 02:57:04.261014       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0131 02:57:04.261065       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0131 02:57:04.261125       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0131 02:57:04.361189       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 02:56:21 UTC, ends at Wed 2024-01-31 02:57:20 UTC. --
	Jan 31 02:57:04 test-preload-723521 kubelet[1099]: E0131 02:57:04.258406    1099 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jan 31 02:57:04 test-preload-723521 kubelet[1099]: I0131 02:57:04.319359    1099 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-723521"
	Jan 31 02:57:04 test-preload-723521 kubelet[1099]: I0131 02:57:04.319467    1099 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-723521"
	Jan 31 02:57:04 test-preload-723521 kubelet[1099]: I0131 02:57:04.323284    1099 setters.go:532] "Node became not ready" node="test-preload-723521" condition={Type:Ready Status:False LastHeartbeatTime:2024-01-31 02:57:04.323219926 +0000 UTC m=+7.261717671 LastTransitionTime:2024-01-31 02:57:04.323219926 +0000 UTC m=+7.261717671 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.208985    1099 apiserver.go:52] "Watching apiserver"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.213866    1099 topology_manager.go:200] "Topology Admit Handler"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.214076    1099 topology_manager.go:200] "Topology Admit Handler"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.214129    1099 topology_manager.go:200] "Topology Admit Handler"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: E0131 02:57:05.231996    1099 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-8kmws" podUID=97688208-d9f8-418c-9a1d-43b2b84ea258
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.360469    1099 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmxc\" (UniqueName: \"kubernetes.io/projected/d6e540ed-42fd-43cb-8918-9f64a286c41e-kube-api-access-pcmxc\") pod \"kube-proxy-gs6f8\" (UID: \"d6e540ed-42fd-43cb-8918-9f64a286c41e\") " pod="kube-system/kube-proxy-gs6f8"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.360517    1099 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97688208-d9f8-418c-9a1d-43b2b84ea258-config-volume\") pod \"coredns-6d4b75cb6d-8kmws\" (UID: \"97688208-d9f8-418c-9a1d-43b2b84ea258\") " pod="kube-system/coredns-6d4b75cb6d-8kmws"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.360597    1099 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6e540ed-42fd-43cb-8918-9f64a286c41e-kube-proxy\") pod \"kube-proxy-gs6f8\" (UID: \"d6e540ed-42fd-43cb-8918-9f64a286c41e\") " pod="kube-system/kube-proxy-gs6f8"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.360660    1099 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6e540ed-42fd-43cb-8918-9f64a286c41e-xtables-lock\") pod \"kube-proxy-gs6f8\" (UID: \"d6e540ed-42fd-43cb-8918-9f64a286c41e\") " pod="kube-system/kube-proxy-gs6f8"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.360688    1099 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/027d4c7f-1fe0-4e5b-a575-2629ddd8a147-tmp\") pod \"storage-provisioner\" (UID: \"027d4c7f-1fe0-4e5b-a575-2629ddd8a147\") " pod="kube-system/storage-provisioner"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.360712    1099 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6e540ed-42fd-43cb-8918-9f64a286c41e-lib-modules\") pod \"kube-proxy-gs6f8\" (UID: \"d6e540ed-42fd-43cb-8918-9f64a286c41e\") " pod="kube-system/kube-proxy-gs6f8"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.360790    1099 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m27q2\" (UniqueName: \"kubernetes.io/projected/027d4c7f-1fe0-4e5b-a575-2629ddd8a147-kube-api-access-m27q2\") pod \"storage-provisioner\" (UID: \"027d4c7f-1fe0-4e5b-a575-2629ddd8a147\") " pod="kube-system/storage-provisioner"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.360814    1099 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w4sp\" (UniqueName: \"kubernetes.io/projected/97688208-d9f8-418c-9a1d-43b2b84ea258-kube-api-access-6w4sp\") pod \"coredns-6d4b75cb6d-8kmws\" (UID: \"97688208-d9f8-418c-9a1d-43b2b84ea258\") " pod="kube-system/coredns-6d4b75cb6d-8kmws"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: I0131 02:57:05.360826    1099 reconciler.go:159] "Reconciler: start to sync state"
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: E0131 02:57:05.464533    1099 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: E0131 02:57:05.464617    1099 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/97688208-d9f8-418c-9a1d-43b2b84ea258-config-volume podName:97688208-d9f8-418c-9a1d-43b2b84ea258 nodeName:}" failed. No retries permitted until 2024-01-31 02:57:05.964589353 +0000 UTC m=+8.903087111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/97688208-d9f8-418c-9a1d-43b2b84ea258-config-volume") pod "coredns-6d4b75cb6d-8kmws" (UID: "97688208-d9f8-418c-9a1d-43b2b84ea258") : object "kube-system"/"coredns" not registered
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: E0131 02:57:05.968552    1099 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 31 02:57:05 test-preload-723521 kubelet[1099]: E0131 02:57:05.968640    1099 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/97688208-d9f8-418c-9a1d-43b2b84ea258-config-volume podName:97688208-d9f8-418c-9a1d-43b2b84ea258 nodeName:}" failed. No retries permitted until 2024-01-31 02:57:06.968623916 +0000 UTC m=+9.907121673 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/97688208-d9f8-418c-9a1d-43b2b84ea258-config-volume") pod "coredns-6d4b75cb6d-8kmws" (UID: "97688208-d9f8-418c-9a1d-43b2b84ea258") : object "kube-system"/"coredns" not registered
	Jan 31 02:57:06 test-preload-723521 kubelet[1099]: E0131 02:57:06.978279    1099 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 31 02:57:06 test-preload-723521 kubelet[1099]: E0131 02:57:06.978348    1099 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/97688208-d9f8-418c-9a1d-43b2b84ea258-config-volume podName:97688208-d9f8-418c-9a1d-43b2b84ea258 nodeName:}" failed. No retries permitted until 2024-01-31 02:57:08.978333788 +0000 UTC m=+11.916831548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/97688208-d9f8-418c-9a1d-43b2b84ea258-config-volume") pod "coredns-6d4b75cb6d-8kmws" (UID: "97688208-d9f8-418c-9a1d-43b2b84ea258") : object "kube-system"/"coredns" not registered
	Jan 31 02:57:07 test-preload-723521 kubelet[1099]: I0131 02:57:07.363981    1099 scope.go:110] "RemoveContainer" containerID="29305a61526b91f2025bc59e5ec759fe56ad1d06771ae4a605cf6854dd0fa8da"
	
	
	==> storage-provisioner [29305a61526b91f2025bc59e5ec759fe56ad1d06771ae4a605cf6854dd0fa8da] <==
	I0131 02:57:06.666978       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0131 02:57:06.669868       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [c659c4862f6511c03bbfe46869a157e3d2084620bc44e82b0cebd431f49f2df5] <==
	I0131 02:57:07.548548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 02:57:07.653551       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 02:57:07.653619       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-723521 -n test-preload-723521
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-723521 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-723521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-723521
--- FAIL: TestPreload (218.55s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (79.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-218490 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-218490 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.807027907s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-218490] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-218490 in cluster pause-218490
	* Updating the running kvm2 "pause-218490" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-218490" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 03:04:44.187862 1449733 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:04:44.188006 1449733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:04:44.188017 1449733 out.go:309] Setting ErrFile to fd 2...
	I0131 03:04:44.188024 1449733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:04:44.188246 1449733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:04:44.188851 1449733 out.go:303] Setting JSON to false
	I0131 03:04:44.189935 1449733 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28027,"bootTime":1706642257,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:04:44.190008 1449733 start.go:138] virtualization: kvm guest
	I0131 03:04:44.283046 1449733 out.go:177] * [pause-218490] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:04:44.408281 1449733 notify.go:220] Checking for updates...
	I0131 03:04:44.430600 1449733 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:04:44.493494 1449733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:04:44.518222 1449733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:04:44.727700 1449733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:04:44.811317 1449733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:04:44.832169 1449733 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:04:44.895994 1449733 config.go:182] Loaded profile config "pause-218490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:04:44.896457 1449733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:04:44.896505 1449733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:04:44.912283 1449733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34651
	I0131 03:04:44.912841 1449733 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:04:44.913528 1449733 main.go:141] libmachine: Using API Version  1
	I0131 03:04:44.913561 1449733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:04:44.913969 1449733 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:04:44.914235 1449733 main.go:141] libmachine: (pause-218490) Calling .DriverName
	I0131 03:04:44.914552 1449733 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:04:44.914870 1449733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:04:44.914923 1449733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:04:44.930747 1449733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0131 03:04:44.931205 1449733 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:04:44.931746 1449733 main.go:141] libmachine: Using API Version  1
	I0131 03:04:44.931775 1449733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:04:44.932094 1449733 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:04:44.932336 1449733 main.go:141] libmachine: (pause-218490) Calling .DriverName
	I0131 03:04:45.067042 1449733 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 03:04:45.158742 1449733 start.go:298] selected driver: kvm2
	I0131 03:04:45.158778 1449733 start.go:902] validating driver "kvm2" against &{Name:pause-218490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:pause-218490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.138 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:04:45.158980 1449733 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:04:45.159387 1449733 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:04:45.159474 1449733 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:04:45.181387 1449733 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:04:45.182437 1449733 cni.go:84] Creating CNI manager for ""
	I0131 03:04:45.182456 1449733 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:04:45.182469 1449733 start_flags.go:321] config:
	{Name:pause-218490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-218490 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.138 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:04:45.182763 1449733 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:04:45.311484 1449733 out.go:177] * Starting control plane node pause-218490 in cluster pause-218490
	I0131 03:04:45.396425 1449733 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:04:45.396517 1449733 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:04:45.396529 1449733 cache.go:56] Caching tarball of preloaded images
	I0131 03:04:45.396654 1449733 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:04:45.396665 1449733 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:04:45.396842 1449733 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/pause-218490/config.json ...
	I0131 03:04:45.397091 1449733 start.go:365] acquiring machines lock for pause-218490: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:05:27.203384 1449733 start.go:369] acquired machines lock for "pause-218490" in 41.806233095s
	I0131 03:05:27.203441 1449733 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:05:27.203451 1449733 fix.go:54] fixHost starting: 
	I0131 03:05:27.203910 1449733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:05:27.203964 1449733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:05:27.223336 1449733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I0131 03:05:27.223831 1449733 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:05:27.224463 1449733 main.go:141] libmachine: Using API Version  1
	I0131 03:05:27.224495 1449733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:05:27.225067 1449733 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:05:27.225269 1449733 main.go:141] libmachine: (pause-218490) Calling .DriverName
	I0131 03:05:27.225514 1449733 main.go:141] libmachine: (pause-218490) Calling .GetState
	I0131 03:05:27.227496 1449733 fix.go:102] recreateIfNeeded on pause-218490: state=Running err=<nil>
	W0131 03:05:27.227521 1449733 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:05:27.229703 1449733 out.go:177] * Updating the running kvm2 "pause-218490" VM ...
	I0131 03:05:27.231421 1449733 machine.go:88] provisioning docker machine ...
	I0131 03:05:27.231453 1449733 main.go:141] libmachine: (pause-218490) Calling .DriverName
	I0131 03:05:27.231674 1449733 main.go:141] libmachine: (pause-218490) Calling .GetMachineName
	I0131 03:05:27.231875 1449733 buildroot.go:166] provisioning hostname "pause-218490"
	I0131 03:05:27.231897 1449733 main.go:141] libmachine: (pause-218490) Calling .GetMachineName
	I0131 03:05:27.232057 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHHostname
	I0131 03:05:27.234531 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.235019 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:27.235042 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.235240 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHPort
	I0131 03:05:27.235442 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:27.235622 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:27.235791 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHUsername
	I0131 03:05:27.235987 1449733 main.go:141] libmachine: Using SSH client type: native
	I0131 03:05:27.236326 1449733 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.138 22 <nil> <nil>}
	I0131 03:05:27.236341 1449733 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-218490 && echo "pause-218490" | sudo tee /etc/hostname
	I0131 03:05:27.382351 1449733 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-218490
	
	I0131 03:05:27.382388 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHHostname
	I0131 03:05:27.385541 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.385891 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:27.385924 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.386124 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHPort
	I0131 03:05:27.386341 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:27.386475 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:27.386652 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHUsername
	I0131 03:05:27.386854 1449733 main.go:141] libmachine: Using SSH client type: native
	I0131 03:05:27.387276 1449733 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.138 22 <nil> <nil>}
	I0131 03:05:27.387305 1449733 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-218490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-218490/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-218490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:05:27.515685 1449733 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:05:27.515730 1449733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:05:27.515761 1449733 buildroot.go:174] setting up certificates
	I0131 03:05:27.515777 1449733 provision.go:83] configureAuth start
	I0131 03:05:27.515796 1449733 main.go:141] libmachine: (pause-218490) Calling .GetMachineName
	I0131 03:05:27.516135 1449733 main.go:141] libmachine: (pause-218490) Calling .GetIP
	I0131 03:05:27.519424 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.519924 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:27.519956 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.520141 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHHostname
	I0131 03:05:27.522426 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.522759 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:27.522775 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.522981 1449733 provision.go:138] copyHostCerts
	I0131 03:05:27.523048 1449733 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:05:27.523058 1449733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:05:27.523114 1449733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:05:27.523195 1449733 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:05:27.523203 1449733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:05:27.523223 1449733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:05:27.523278 1449733 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:05:27.523285 1449733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:05:27.523302 1449733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:05:27.523360 1449733 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.pause-218490 san=[192.168.39.138 192.168.39.138 localhost 127.0.0.1 minikube pause-218490]
	I0131 03:05:27.576363 1449733 provision.go:172] copyRemoteCerts
	I0131 03:05:27.576418 1449733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:05:27.576446 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHHostname
	I0131 03:05:27.579354 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.579738 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:27.579776 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.579962 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHPort
	I0131 03:05:27.580151 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:27.580307 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHUsername
	I0131 03:05:27.580506 1449733 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/pause-218490/id_rsa Username:docker}
	I0131 03:05:27.678923 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:05:27.708028 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0131 03:05:27.733007 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:05:27.755524 1449733 provision.go:86] duration metric: configureAuth took 239.723932ms
	I0131 03:05:27.755571 1449733 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:05:27.755859 1449733 config.go:182] Loaded profile config "pause-218490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:05:27.755967 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHHostname
	I0131 03:05:27.758908 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.759323 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:27.759353 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:27.759555 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHPort
	I0131 03:05:27.759759 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:27.759932 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:27.760132 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHUsername
	I0131 03:05:27.760313 1449733 main.go:141] libmachine: Using SSH client type: native
	I0131 03:05:27.760634 1449733 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.138 22 <nil> <nil>}
	I0131 03:05:27.760654 1449733 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:05:33.398238 1449733 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:05:33.398267 1449733 machine.go:91] provisioned docker machine in 6.166825282s
	I0131 03:05:33.398279 1449733 start.go:300] post-start starting for "pause-218490" (driver="kvm2")
	I0131 03:05:33.398290 1449733 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:05:33.398308 1449733 main.go:141] libmachine: (pause-218490) Calling .DriverName
	I0131 03:05:33.398716 1449733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:05:33.398748 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHHostname
	I0131 03:05:33.401532 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.401980 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:33.402012 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.402174 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHPort
	I0131 03:05:33.402384 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:33.402559 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHUsername
	I0131 03:05:33.402734 1449733 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/pause-218490/id_rsa Username:docker}
	I0131 03:05:33.496830 1449733 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:05:33.500696 1449733 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:05:33.500723 1449733 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:05:33.500787 1449733 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:05:33.500855 1449733 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:05:33.500956 1449733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:05:33.510334 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:05:33.532496 1449733 start.go:303] post-start completed in 134.197993ms
	I0131 03:05:33.532528 1449733 fix.go:56] fixHost completed within 6.329077818s
	I0131 03:05:33.532571 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHHostname
	I0131 03:05:33.535304 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.535715 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:33.535744 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.535926 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHPort
	I0131 03:05:33.536171 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:33.536349 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:33.536545 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHUsername
	I0131 03:05:33.536791 1449733 main.go:141] libmachine: Using SSH client type: native
	I0131 03:05:33.537148 1449733 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.138 22 <nil> <nil>}
	I0131 03:05:33.537161 1449733 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0131 03:05:33.663604 1449733 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706670333.659361530
	
	I0131 03:05:33.663647 1449733 fix.go:206] guest clock: 1706670333.659361530
	I0131 03:05:33.663658 1449733 fix.go:219] Guest: 2024-01-31 03:05:33.65936153 +0000 UTC Remote: 2024-01-31 03:05:33.53253286 +0000 UTC m=+49.409783445 (delta=126.82867ms)
	I0131 03:05:33.663730 1449733 fix.go:190] guest clock delta is within tolerance: 126.82867ms
	I0131 03:05:33.663744 1449733 start.go:83] releasing machines lock for "pause-218490", held for 6.460324832s
	I0131 03:05:33.663789 1449733 main.go:141] libmachine: (pause-218490) Calling .DriverName
	I0131 03:05:33.664122 1449733 main.go:141] libmachine: (pause-218490) Calling .GetIP
	I0131 03:05:33.666993 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.667390 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:33.667424 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.667572 1449733 main.go:141] libmachine: (pause-218490) Calling .DriverName
	I0131 03:05:33.668208 1449733 main.go:141] libmachine: (pause-218490) Calling .DriverName
	I0131 03:05:33.668403 1449733 main.go:141] libmachine: (pause-218490) Calling .DriverName
	I0131 03:05:33.668502 1449733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:05:33.668548 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHHostname
	I0131 03:05:33.668658 1449733 ssh_runner.go:195] Run: cat /version.json
	I0131 03:05:33.668692 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHHostname
	I0131 03:05:33.671540 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.671887 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:33.671927 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.671949 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.672118 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHPort
	I0131 03:05:33.672333 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:33.672510 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:33.672513 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHUsername
	I0131 03:05:33.672541 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:33.672727 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHPort
	I0131 03:05:33.672715 1449733 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/pause-218490/id_rsa Username:docker}
	I0131 03:05:33.672879 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHKeyPath
	I0131 03:05:33.673033 1449733 main.go:141] libmachine: (pause-218490) Calling .GetSSHUsername
	I0131 03:05:33.673201 1449733 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/pause-218490/id_rsa Username:docker}
	I0131 03:05:33.805893 1449733 ssh_runner.go:195] Run: systemctl --version
	I0131 03:05:33.811999 1449733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:05:33.980910 1449733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:05:33.988522 1449733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:05:33.988624 1449733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:05:33.998586 1449733 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0131 03:05:33.998617 1449733 start.go:475] detecting cgroup driver to use...
	I0131 03:05:33.998711 1449733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:05:34.016474 1449733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:05:34.031883 1449733 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:05:34.031950 1449733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:05:34.049216 1449733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:05:34.064882 1449733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:05:34.241784 1449733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:05:34.396672 1449733 docker.go:233] disabling docker service ...
	I0131 03:05:34.396755 1449733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:05:34.412874 1449733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:05:34.427947 1449733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:05:34.585400 1449733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:05:34.761979 1449733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:05:34.777614 1449733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:05:34.795613 1449733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:05:34.795703 1449733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:05:34.805984 1449733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:05:34.806074 1449733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:05:34.816908 1449733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:05:34.839138 1449733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:05:34.850410 1449733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:05:34.861627 1449733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:05:34.872299 1449733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:05:34.882558 1449733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:05:35.017713 1449733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:05:36.821787 1449733 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.804030812s)
	I0131 03:05:36.821823 1449733 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:05:36.821880 1449733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:05:36.828258 1449733 start.go:543] Will wait 60s for crictl version
	I0131 03:05:36.828326 1449733 ssh_runner.go:195] Run: which crictl
	I0131 03:05:36.832195 1449733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:05:36.874561 1449733 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:05:36.874642 1449733 ssh_runner.go:195] Run: crio --version
	I0131 03:05:36.924534 1449733 ssh_runner.go:195] Run: crio --version
	I0131 03:05:36.973321 1449733 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:05:36.974781 1449733 main.go:141] libmachine: (pause-218490) Calling .GetIP
	I0131 03:05:36.977971 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:36.978391 1449733 main.go:141] libmachine: (pause-218490) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:75:91", ip: ""} in network mk-pause-218490: {Iface:virbr3 ExpiryTime:2024-01-31 04:03:52 +0000 UTC Type:0 Mac:52:54:00:ae:75:91 Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:pause-218490 Clientid:01:52:54:00:ae:75:91}
	I0131 03:05:36.978449 1449733 main.go:141] libmachine: (pause-218490) DBG | domain pause-218490 has defined IP address 192.168.39.138 and MAC address 52:54:00:ae:75:91 in network mk-pause-218490
	I0131 03:05:36.978659 1449733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 03:05:36.983138 1449733 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:05:36.983186 1449733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:05:37.039882 1449733 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:05:37.039914 1449733 crio.go:415] Images already preloaded, skipping extraction
	I0131 03:05:37.039979 1449733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:05:37.074563 1449733 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:05:37.074590 1449733 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:05:37.074658 1449733 ssh_runner.go:195] Run: crio config
	I0131 03:05:37.326791 1449733 cni.go:84] Creating CNI manager for ""
	I0131 03:05:37.326820 1449733 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:05:37.326845 1449733 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:05:37.326871 1449733 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.138 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-218490 NodeName:pause-218490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:05:37.327089 1449733 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-218490"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:05:37.327194 1449733 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-218490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-218490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:05:37.327267 1449733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:05:37.528030 1449733 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:05:37.528121 1449733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:05:37.562890 1449733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0131 03:05:37.609108 1449733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:05:37.677206 1449733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0131 03:05:37.703117 1449733 ssh_runner.go:195] Run: grep 192.168.39.138	control-plane.minikube.internal$ /etc/hosts
	I0131 03:05:37.710040 1449733 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/pause-218490 for IP: 192.168.39.138
	I0131 03:05:37.710081 1449733 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:05:37.710270 1449733 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:05:37.710320 1449733 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:05:37.710411 1449733 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/pause-218490/client.key
	I0131 03:05:37.710510 1449733 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/pause-218490/apiserver.key.2fd7e72a
	I0131 03:05:37.710568 1449733 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/pause-218490/proxy-client.key
	I0131 03:05:37.710707 1449733 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:05:37.710745 1449733 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:05:37.710760 1449733 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:05:37.710794 1449733 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:05:37.710828 1449733 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:05:37.710859 1449733 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:05:37.710940 1449733 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:05:37.711787 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/pause-218490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:05:37.753688 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/pause-218490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 03:05:37.788469 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/pause-218490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:05:37.829650 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/pause-218490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 03:05:37.873425 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:05:37.933572 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:05:37.977601 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:05:38.032625 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:05:38.075305 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:05:38.115988 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:05:38.156084 1449733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:05:38.197093 1449733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:05:38.224182 1449733 ssh_runner.go:195] Run: openssl version
	I0131 03:05:38.233052 1449733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:05:38.250809 1449733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:05:38.258878 1449733 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:05:38.258956 1449733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:05:38.267026 1449733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:05:38.283294 1449733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:05:38.306546 1449733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:05:38.315427 1449733 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:05:38.315506 1449733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:05:38.323904 1449733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:05:38.341390 1449733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:05:38.359145 1449733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:05:38.365976 1449733 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:05:38.366057 1449733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:05:38.376344 1449733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:05:38.396866 1449733 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:05:38.405825 1449733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:05:38.414833 1449733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:05:38.425788 1449733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:05:38.436566 1449733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:05:38.446711 1449733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:05:38.459868 1449733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:05:38.470437 1449733 kubeadm.go:404] StartCluster: {Name:pause-218490 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-218490 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.138 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:05:38.470628 1449733 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:05:38.470718 1449733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:05:38.659269 1449733 cri.go:89] found id: "a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3"
	I0131 03:05:38.659315 1449733 cri.go:89] found id: "b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e"
	I0131 03:05:38.659329 1449733 cri.go:89] found id: "f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6"
	I0131 03:05:38.659336 1449733 cri.go:89] found id: "5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690"
	I0131 03:05:38.659342 1449733 cri.go:89] found id: "dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d"
	I0131 03:05:38.659349 1449733 cri.go:89] found id: "f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5"
	I0131 03:05:38.659354 1449733 cri.go:89] found id: ""
	I0131 03:05:38.659426 1449733 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-218490 -n pause-218490
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-218490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-218490 logs -n 25: (1.348825047s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-278852          | kubernetes-upgrade-278852 | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:03 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-609081 stop           | minikube                  | jenkins | v1.26.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:02 UTC |
	| start   | -p stopped-upgrade-609081             | stopped-upgrade-609081    | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:03 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-317821 sudo           | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-317821                | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:02 UTC |
	| start   | -p NoKubernetes-317821                | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:03 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-331640             | running-upgrade-331640    | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:03 UTC |
	| start   | -p pause-218490 --memory=2048         | pause-218490              | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:04 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-278852          | kubernetes-upgrade-278852 | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-278852          | kubernetes-upgrade-278852 | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:04 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-609081             | stopped-upgrade-609081    | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:03 UTC |
	| ssh     | -p NoKubernetes-317821 sudo           | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-317821                | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:03 UTC |
	| start   | -p cert-expiration-897667             | cert-expiration-897667    | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:04 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-097545          | force-systemd-flag-097545 | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:05 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-278852          | kubernetes-upgrade-278852 | jenkins | v1.32.0 | 31 Jan 24 03:04 UTC | 31 Jan 24 03:04 UTC |
	| start   | -p cert-options-430741                | cert-options-430741       | jenkins | v1.32.0 | 31 Jan 24 03:04 UTC | 31 Jan 24 03:05 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-218490                       | pause-218490              | jenkins | v1.32.0 | 31 Jan 24 03:04 UTC | 31 Jan 24 03:05 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-097545 ssh cat     | force-systemd-flag-097545 | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-097545          | force-systemd-flag-097545 | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	| start   | -p auto-390748 --memory=3072          | auto-390748               | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-430741 ssh               | cert-options-430741       | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-430741 -- sudo        | cert-options-430741       | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-430741                | cert-options-430741       | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	| start   | -p kindnet-390748                     | kindnet-390748            | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:05:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:05:50.979747 1450691 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:05:50.979901 1450691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:05:50.979912 1450691 out.go:309] Setting ErrFile to fd 2...
	I0131 03:05:50.979917 1450691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:05:50.980136 1450691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:05:50.980779 1450691 out.go:303] Setting JSON to false
	I0131 03:05:50.981856 1450691 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28094,"bootTime":1706642257,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:05:50.981927 1450691 start.go:138] virtualization: kvm guest
	I0131 03:05:50.984323 1450691 out.go:177] * [kindnet-390748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:05:50.985669 1450691 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:05:50.985737 1450691 notify.go:220] Checking for updates...
	I0131 03:05:50.987061 1450691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:05:50.988560 1450691 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:05:50.989935 1450691 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:05:50.991271 1450691 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:05:50.992574 1450691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:05:50.994382 1450691 config.go:182] Loaded profile config "auto-390748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:05:50.994509 1450691 config.go:182] Loaded profile config "cert-expiration-897667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:05:50.994714 1450691 config.go:182] Loaded profile config "pause-218490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:05:50.994820 1450691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:05:51.034344 1450691 out.go:177] * Using the kvm2 driver based on user configuration
	I0131 03:05:51.035789 1450691 start.go:298] selected driver: kvm2
	I0131 03:05:51.035810 1450691 start.go:902] validating driver "kvm2" against <nil>
	I0131 03:05:51.035826 1450691 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:05:51.036648 1450691 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:05:51.036760 1450691 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:05:51.052976 1450691 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:05:51.053074 1450691 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 03:05:51.053285 1450691 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:05:51.053336 1450691 cni.go:84] Creating CNI manager for "kindnet"
	I0131 03:05:51.053351 1450691 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0131 03:05:51.053363 1450691 start_flags.go:321] config:
	{Name:kindnet-390748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-390748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:05:51.053572 1450691 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:05:51.055295 1450691 out.go:177] * Starting control plane node kindnet-390748 in cluster kindnet-390748
	I0131 03:05:50.150118 1449733 pod_ready.go:92] pod "coredns-5dd5756b68-j6htm" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:50.150143 1449733 pod_ready.go:81] duration metric: took 3.507046186s waiting for pod "coredns-5dd5756b68-j6htm" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:50.150151 1449733 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:52.157911 1449733 pod_ready.go:102] pod "etcd-pause-218490" in "kube-system" namespace has status "Ready":"False"
	I0131 03:05:50.784166 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:50.784685 1450261 main.go:141] libmachine: (auto-390748) DBG | unable to find current IP address of domain auto-390748 in network mk-auto-390748
	I0131 03:05:50.784743 1450261 main.go:141] libmachine: (auto-390748) DBG | I0131 03:05:50.784653 1450312 retry.go:31] will retry after 4.449193384s: waiting for machine to come up
	I0131 03:05:55.236439 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:55.237005 1450261 main.go:141] libmachine: (auto-390748) DBG | unable to find current IP address of domain auto-390748 in network mk-auto-390748
	I0131 03:05:55.237028 1450261 main.go:141] libmachine: (auto-390748) DBG | I0131 03:05:55.236951 1450312 retry.go:31] will retry after 3.742413695s: waiting for machine to come up
	I0131 03:05:51.056782 1450691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:05:51.056834 1450691 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:05:51.056850 1450691 cache.go:56] Caching tarball of preloaded images
	I0131 03:05:51.056945 1450691 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:05:51.056957 1450691 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:05:51.057071 1450691 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/config.json ...
	I0131 03:05:51.057094 1450691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/config.json: {Name:mk175207567b19a49c1bbd1dc9edf0a11435b550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:05:51.057280 1450691 start.go:365] acquiring machines lock for kindnet-390748: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:05:54.658075 1449733 pod_ready.go:102] pod "etcd-pause-218490" in "kube-system" namespace has status "Ready":"False"
	I0131 03:05:56.659168 1449733 pod_ready.go:102] pod "etcd-pause-218490" in "kube-system" namespace has status "Ready":"False"
	I0131 03:05:57.656336 1449733 pod_ready.go:92] pod "etcd-pause-218490" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:57.656359 1449733 pod_ready.go:81] duration metric: took 7.506201252s waiting for pod "etcd-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.656370 1449733 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.661125 1449733 pod_ready.go:92] pod "kube-apiserver-pause-218490" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:57.661148 1449733 pod_ready.go:81] duration metric: took 4.766308ms waiting for pod "kube-apiserver-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.661157 1449733 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.665533 1449733 pod_ready.go:92] pod "kube-controller-manager-pause-218490" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:57.665552 1449733 pod_ready.go:81] duration metric: took 4.389518ms waiting for pod "kube-controller-manager-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.665560 1449733 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rvz49" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.670446 1449733 pod_ready.go:92] pod "kube-proxy-rvz49" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:57.670462 1449733 pod_ready.go:81] duration metric: took 4.897423ms waiting for pod "kube-proxy-rvz49" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.670470 1449733 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:58.177515 1449733 pod_ready.go:92] pod "kube-scheduler-pause-218490" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:58.177551 1449733 pod_ready.go:81] duration metric: took 507.073962ms waiting for pod "kube-scheduler-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:58.177577 1449733 pod_ready.go:38] duration metric: took 11.540488745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:05:58.177600 1449733 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:05:58.177671 1449733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:05:58.190331 1449733 api_server.go:72] duration metric: took 11.672357465s to wait for apiserver process to appear ...
	I0131 03:05:58.190361 1449733 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:05:58.190388 1449733 api_server.go:253] Checking apiserver healthz at https://192.168.39.138:8443/healthz ...
	I0131 03:05:58.197108 1449733 api_server.go:279] https://192.168.39.138:8443/healthz returned 200:
	ok
	I0131 03:05:58.198687 1449733 api_server.go:141] control plane version: v1.28.4
	I0131 03:05:58.198708 1449733 api_server.go:131] duration metric: took 8.33987ms to wait for apiserver health ...
	I0131 03:05:58.198717 1449733 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:05:58.257526 1449733 system_pods.go:59] 6 kube-system pods found
	I0131 03:05:58.257555 1449733 system_pods.go:61] "coredns-5dd5756b68-j6htm" [f935de64-0599-41ac-9a8f-fa1b1fd507a2] Running
	I0131 03:05:58.257560 1449733 system_pods.go:61] "etcd-pause-218490" [5de7f78d-f2c2-4b1c-b12e-c9bd5a52ca47] Running
	I0131 03:05:58.257564 1449733 system_pods.go:61] "kube-apiserver-pause-218490" [dfd11e28-7501-4e4f-a1f2-92a0f8925ccc] Running
	I0131 03:05:58.257568 1449733 system_pods.go:61] "kube-controller-manager-pause-218490" [c8905a27-ad81-431b-826f-31d7b68dbdff] Running
	I0131 03:05:58.257572 1449733 system_pods.go:61] "kube-proxy-rvz49" [568ad034-cb61-44ba-9fd9-892cfd5b9fc6] Running
	I0131 03:05:58.257576 1449733 system_pods.go:61] "kube-scheduler-pause-218490" [1a0103e8-c834-4928-bff0-ed099e88bead] Running
	I0131 03:05:58.257582 1449733 system_pods.go:74] duration metric: took 58.8583ms to wait for pod list to return data ...
	I0131 03:05:58.257589 1449733 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:05:58.454976 1449733 default_sa.go:45] found service account: "default"
	I0131 03:05:58.455014 1449733 default_sa.go:55] duration metric: took 197.416785ms for default service account to be created ...
	I0131 03:05:58.455026 1449733 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:05:58.657213 1449733 system_pods.go:86] 6 kube-system pods found
	I0131 03:05:58.657244 1449733 system_pods.go:89] "coredns-5dd5756b68-j6htm" [f935de64-0599-41ac-9a8f-fa1b1fd507a2] Running
	I0131 03:05:58.657250 1449733 system_pods.go:89] "etcd-pause-218490" [5de7f78d-f2c2-4b1c-b12e-c9bd5a52ca47] Running
	I0131 03:05:58.657254 1449733 system_pods.go:89] "kube-apiserver-pause-218490" [dfd11e28-7501-4e4f-a1f2-92a0f8925ccc] Running
	I0131 03:05:58.657259 1449733 system_pods.go:89] "kube-controller-manager-pause-218490" [c8905a27-ad81-431b-826f-31d7b68dbdff] Running
	I0131 03:05:58.657264 1449733 system_pods.go:89] "kube-proxy-rvz49" [568ad034-cb61-44ba-9fd9-892cfd5b9fc6] Running
	I0131 03:05:58.657268 1449733 system_pods.go:89] "kube-scheduler-pause-218490" [1a0103e8-c834-4928-bff0-ed099e88bead] Running
	I0131 03:05:58.657274 1449733 system_pods.go:126] duration metric: took 202.24259ms to wait for k8s-apps to be running ...
	I0131 03:05:58.657280 1449733 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:05:58.657323 1449733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:05:58.671249 1449733 system_svc.go:56] duration metric: took 13.957513ms WaitForService to wait for kubelet.
	I0131 03:05:58.671278 1449733 kubeadm.go:581] duration metric: took 12.1533121s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:05:58.671299 1449733 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:05:58.855702 1449733 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:05:58.855734 1449733 node_conditions.go:123] node cpu capacity is 2
	I0131 03:05:58.855744 1449733 node_conditions.go:105] duration metric: took 184.440764ms to run NodePressure ...
	I0131 03:05:58.855757 1449733 start.go:228] waiting for startup goroutines ...
	I0131 03:05:58.855762 1449733 start.go:233] waiting for cluster config update ...
	I0131 03:05:58.855768 1449733 start.go:242] writing updated cluster config ...
	I0131 03:05:58.856089 1449733 ssh_runner.go:195] Run: rm -f paused
	I0131 03:05:58.907438 1449733 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:05:58.909481 1449733 out.go:177] * Done! kubectl is now configured to use "pause-218490" cluster and "default" namespace by default
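	(The log above shows minikube probing the apiserver's /healthz endpoint and reading the control-plane version before declaring the cluster ready. A minimal way to repeat those two checks by hand, assuming the "pause-218490" kubeconfig context that this log reports as configured, would be:
	  kubectl --context pause-218490 get --raw /healthz   # expected output: ok
	  kubectl --context pause-218490 version              # reports client and server versions, e.g. kubectl 1.29.1 vs control plane 1.28.4
	This is only an illustrative reproduction of the probes logged above, not part of the test run.)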
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:03:48 UTC, ends at Wed 2024-01-31 03:05:59 UTC. --
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.646798565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706670359646782813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=79a80256-47e2-4ea9-b38c-5c1a866b0819 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.647568227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b0308d19-b79e-4e42-8c4f-01303b0707dc name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.647613888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b0308d19-b79e-4e42-8c4f-01303b0707dc name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.647842801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735,PodSandboxId:b16d323746f60bd44b13db5e4e9297cfab44f69f96790c70abf387c4998e770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706670341812093054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f,PodSandboxId:24cca48a7ef5aa8634956d49a952e61c225cb3cb6f3f1e16eb65e730d2148b0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706670340559758178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a,PodSandboxId:3bf3e9e4a8fa3cde407de55ce1d06cc050abe45c9f8e92f521f7d4724b400fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706670339284922712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681
347ce1546f8e16f3648243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5,PodSandboxId:49ca164101587a9cecb84e0ad205bbc71f2bc72224d62ef4b9be9e3c2e4f9447,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706670339015746559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9ff0db61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30,PodSandboxId:fb8a3b325d407705c1d151e3641032ebbd1db29c0af5c0d7257b0929dba228f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706670338592104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.
container.hash: d1fd4562,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333,PodSandboxId:5561a2faf9f907e265254f65bc10ebf2380c46eeffe82ea957ca1c444ac73d9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706670338530881529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e,PodSandboxId:6c2132152ae0552e4fdccaab1b243521aea681798758a216812495787983c2a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1706670280993676559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3,PodSandboxId:259fa81edfc6180f63d83fc36eb8718800b7030890715268b71757c4b4d8432f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1706670281053588852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6,PodSandboxId:e85b9c94b06e95b7c886fa8109948a71504a779299fdafb620baef2682b9af42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1706670259372589335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681347ce1546f8e16f3648
243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690,PodSandboxId:4b6bd2fdd95b6df776b686da76d2bfd9c7dc4f85870c6e385f582db8a8eb627e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1706670259113646224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]string{io.kubernetes.container.hash: 9ff0db61,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d,PodSandboxId:c66f125826c13ec9cecace45286f0595424a296765eb918492dbf80dddbbe7ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1706670258984862106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1fd4562,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5,PodSandboxId:97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1706670258892586625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b0308d19-b79e-4e42-8c4f-01303b0707dc name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.693266337Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e37a52ca-4178-4625-96b6-093f0d27b6fe name=/runtime.v1.RuntimeService/Version
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.693346278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e37a52ca-4178-4625-96b6-093f0d27b6fe name=/runtime.v1.RuntimeService/Version
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.694602331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=eb45a9a5-f070-45de-8475-8a36125529d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.694983411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706670359694968428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=eb45a9a5-f070-45de-8475-8a36125529d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.695939870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=038c6c6a-a562-4f50-89c9-abbb9b9d1874 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.696009891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=038c6c6a-a562-4f50-89c9-abbb9b9d1874 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.696298494Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735,PodSandboxId:b16d323746f60bd44b13db5e4e9297cfab44f69f96790c70abf387c4998e770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706670341812093054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f,PodSandboxId:24cca48a7ef5aa8634956d49a952e61c225cb3cb6f3f1e16eb65e730d2148b0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706670340559758178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a,PodSandboxId:3bf3e9e4a8fa3cde407de55ce1d06cc050abe45c9f8e92f521f7d4724b400fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706670339284922712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681
347ce1546f8e16f3648243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5,PodSandboxId:49ca164101587a9cecb84e0ad205bbc71f2bc72224d62ef4b9be9e3c2e4f9447,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706670339015746559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9ff0db61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30,PodSandboxId:fb8a3b325d407705c1d151e3641032ebbd1db29c0af5c0d7257b0929dba228f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706670338592104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.
container.hash: d1fd4562,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333,PodSandboxId:5561a2faf9f907e265254f65bc10ebf2380c46eeffe82ea957ca1c444ac73d9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706670338530881529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e,PodSandboxId:6c2132152ae0552e4fdccaab1b243521aea681798758a216812495787983c2a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1706670280993676559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3,PodSandboxId:259fa81edfc6180f63d83fc36eb8718800b7030890715268b71757c4b4d8432f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1706670281053588852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6,PodSandboxId:e85b9c94b06e95b7c886fa8109948a71504a779299fdafb620baef2682b9af42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1706670259372589335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681347ce1546f8e16f3648
243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690,PodSandboxId:4b6bd2fdd95b6df776b686da76d2bfd9c7dc4f85870c6e385f582db8a8eb627e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1706670259113646224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]string{io.kubernetes.container.hash: 9ff0db61,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d,PodSandboxId:c66f125826c13ec9cecace45286f0595424a296765eb918492dbf80dddbbe7ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1706670258984862106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1fd4562,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5,PodSandboxId:97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1706670258892586625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=038c6c6a-a562-4f50-89c9-abbb9b9d1874 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.739918359Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b2e1e2d7-044e-4ef9-8e03-6a91306fbf28 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.740001460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b2e1e2d7-044e-4ef9-8e03-6a91306fbf28 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.741709253Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9373ab6d-a5ac-4abc-be6d-a04a349cead3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.742427897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706670359742408533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=9373ab6d-a5ac-4abc-be6d-a04a349cead3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.743561018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d64cc28-da58-4da4-b86c-7eef2a6a093b name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.743629379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d64cc28-da58-4da4-b86c-7eef2a6a093b name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.743890202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735,PodSandboxId:b16d323746f60bd44b13db5e4e9297cfab44f69f96790c70abf387c4998e770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706670341812093054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f,PodSandboxId:24cca48a7ef5aa8634956d49a952e61c225cb3cb6f3f1e16eb65e730d2148b0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706670340559758178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a,PodSandboxId:3bf3e9e4a8fa3cde407de55ce1d06cc050abe45c9f8e92f521f7d4724b400fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706670339284922712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681
347ce1546f8e16f3648243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5,PodSandboxId:49ca164101587a9cecb84e0ad205bbc71f2bc72224d62ef4b9be9e3c2e4f9447,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706670339015746559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9ff0db61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30,PodSandboxId:fb8a3b325d407705c1d151e3641032ebbd1db29c0af5c0d7257b0929dba228f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706670338592104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.
container.hash: d1fd4562,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333,PodSandboxId:5561a2faf9f907e265254f65bc10ebf2380c46eeffe82ea957ca1c444ac73d9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706670338530881529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e,PodSandboxId:6c2132152ae0552e4fdccaab1b243521aea681798758a216812495787983c2a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1706670280993676559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3,PodSandboxId:259fa81edfc6180f63d83fc36eb8718800b7030890715268b71757c4b4d8432f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1706670281053588852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6,PodSandboxId:e85b9c94b06e95b7c886fa8109948a71504a779299fdafb620baef2682b9af42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1706670259372589335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681347ce1546f8e16f3648
243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690,PodSandboxId:4b6bd2fdd95b6df776b686da76d2bfd9c7dc4f85870c6e385f582db8a8eb627e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1706670259113646224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]string{io.kubernetes.container.hash: 9ff0db61,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d,PodSandboxId:c66f125826c13ec9cecace45286f0595424a296765eb918492dbf80dddbbe7ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1706670258984862106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1fd4562,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5,PodSandboxId:97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1706670258892586625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d64cc28-da58-4da4-b86c-7eef2a6a093b name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.789221971Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bc00b3fb-91eb-450d-8b44-7e3f3a2cc280 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.789281284Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bc00b3fb-91eb-450d-8b44-7e3f3a2cc280 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.790560265Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=06a4fd49-3911-48b5-84f4-92d655ca33e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.791141492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706670359791068514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=06a4fd49-3911-48b5-84f4-92d655ca33e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.791645956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fc2253c0-cd9f-42f3-97cb-2621f5cd09ef name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.791691039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fc2253c0-cd9f-42f3-97cb-2621f5cd09ef name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:05:59 pause-218490 crio[2113]: time="2024-01-31 03:05:59.791915226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735,PodSandboxId:b16d323746f60bd44b13db5e4e9297cfab44f69f96790c70abf387c4998e770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706670341812093054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f,PodSandboxId:24cca48a7ef5aa8634956d49a952e61c225cb3cb6f3f1e16eb65e730d2148b0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706670340559758178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a,PodSandboxId:3bf3e9e4a8fa3cde407de55ce1d06cc050abe45c9f8e92f521f7d4724b400fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706670339284922712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681
347ce1546f8e16f3648243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5,PodSandboxId:49ca164101587a9cecb84e0ad205bbc71f2bc72224d62ef4b9be9e3c2e4f9447,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706670339015746559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9ff0db61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30,PodSandboxId:fb8a3b325d407705c1d151e3641032ebbd1db29c0af5c0d7257b0929dba228f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706670338592104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.
container.hash: d1fd4562,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333,PodSandboxId:5561a2faf9f907e265254f65bc10ebf2380c46eeffe82ea957ca1c444ac73d9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706670338530881529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e,PodSandboxId:6c2132152ae0552e4fdccaab1b243521aea681798758a216812495787983c2a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1706670280993676559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3,PodSandboxId:259fa81edfc6180f63d83fc36eb8718800b7030890715268b71757c4b4d8432f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1706670281053588852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6,PodSandboxId:e85b9c94b06e95b7c886fa8109948a71504a779299fdafb620baef2682b9af42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1706670259372589335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681347ce1546f8e16f3648
243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690,PodSandboxId:4b6bd2fdd95b6df776b686da76d2bfd9c7dc4f85870c6e385f582db8a8eb627e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1706670259113646224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]string{io.kubernetes.container.hash: 9ff0db61,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d,PodSandboxId:c66f125826c13ec9cecace45286f0595424a296765eb918492dbf80dddbbe7ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1706670258984862106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1fd4562,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5,PodSandboxId:97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1706670258892586625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fc2253c0-cd9f-42f3-97cb-2621f5cd09ef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	67cbe72757ad0       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   18 seconds ago       Running             kube-proxy                1                   b16d323746f60       kube-proxy-rvz49
	1087598c8dc64       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   19 seconds ago       Running             coredns                   1                   24cca48a7ef5a       coredns-5dd5756b68-j6htm
	ae09eb69a10d5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   20 seconds ago       Running             kube-scheduler            1                   3bf3e9e4a8fa3       kube-scheduler-pause-218490
	93f30be8cfed2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   20 seconds ago       Running             etcd                      1                   49ca164101587       etcd-pause-218490
	31b1d5b911ed3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   21 seconds ago       Running             kube-apiserver            1                   fb8a3b325d407       kube-apiserver-pause-218490
	69db172d16380       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   21 seconds ago       Running             kube-controller-manager   1                   5561a2faf9f90       kube-controller-manager-pause-218490
	a5bb008b08bd6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   259fa81edfc61       coredns-5dd5756b68-j6htm
	b94b4b01351b2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   About a minute ago   Exited              kube-proxy                0                   6c2132152ae05       kube-proxy-rvz49
	f9309037c8bf5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   About a minute ago   Exited              kube-scheduler            0                   e85b9c94b06e9       kube-scheduler-pause-218490
	5d659716881e1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   About a minute ago   Exited              etcd                      0                   4b6bd2fdd95b6       etcd-pause-218490
	dbf09a5b997ad       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   About a minute ago   Exited              kube-apiserver            0                   c66f125826c13       kube-apiserver-pause-218490
	f17b87188e305       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   About a minute ago   Exited              kube-controller-manager   0                   97dd4a0bb5486       kube-controller-manager-pause-218490
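The listing above is the CRI-level view at collection time: each control-plane and system container has a restarted instance (attempt 1, Running) alongside its original instance (attempt 0, Exited). A roughly equivalent listing can be produced by hand inside the node, assuming crictl is available in the minikube VM (illustrative command, not part of the captured output):

	out/minikube-linux-amd64 -p pause-218490 ssh "sudo crictl ps -a"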
	
	
	==> coredns [1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51145 - 44375 "HINFO IN 3563477475614635631.1897945761809634163. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011492199s
	
	
	==> coredns [a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
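Both coredns blocks belong to the same pod, coredns-5dd5756b68-j6htm: the first is the restarted container (attempt 1) waiting for the API server to come back, the second is the original container (attempt 0) reloading its config and then shutting down on SIGTERM. Outside of this report the two instances could be fetched separately (illustrative commands, assuming the profile's kubeconfig context still exists):

	kubectl --context pause-218490 -n kube-system logs coredns-5dd5756b68-j6htm
	kubectl --context pause-218490 -n kube-system logs coredns-5dd5756b68-j6htm --previous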
	
	
	==> describe nodes <==
	Name:               pause-218490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-218490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=pause-218490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_04_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:04:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-218490
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 03:05:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:04:47 +0000   Wed, 31 Jan 2024 03:04:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:04:47 +0000   Wed, 31 Jan 2024 03:04:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:04:47 +0000   Wed, 31 Jan 2024 03:04:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:04:47 +0000   Wed, 31 Jan 2024 03:04:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.138
	  Hostname:    pause-218490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb5e2773c7964632a7bc7d6497b9b7d6
	  System UUID:                cb5e2773-c796-4632-a7bc-7d6497b9b7d6
	  Boot ID:                    46375ca2-f694-4c86-95a4-f9a660800530
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j6htm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     81s
	  kube-system                 etcd-pause-218490                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         93s
	  kube-system                 kube-apiserver-pause-218490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-218490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-rvz49                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-218490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 78s                  kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node pause-218490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node pause-218490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node pause-218490 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  Starting                 94s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s                  kubelet          Node pause-218490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                  kubelet          Node pause-218490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s                  kubelet          Node pause-218490 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                93s                  kubelet          Node pause-218490 status is now: NodeReady
	  Normal  RegisteredNode           82s                  node-controller  Node pause-218490 event: Registered Node pause-218490 in Controller
	  Normal  RegisteredNode           3s                   node-controller  Node pause-218490 event: Registered Node pause-218490 in Controller
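The node summary above corresponds to a kubectl describe of the single control-plane node; the two RegisteredNode events reflect the controller manager being restarted during the test. It can be reproduced against the live cluster with (illustrative, assuming the profile's context is still configured):

	kubectl --context pause-218490 describe node pause-218490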
	
	
	==> dmesg <==
	[Jan31 03:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064390] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.289009] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.548793] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136683] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.992343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan31 03:04] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.103704] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.165000] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.126937] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.206128] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +10.175917] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +8.837966] systemd-fstab-generator[1262]: Ignoring "noauto" for root device
	[Jan31 03:05] systemd-fstab-generator[2038]: Ignoring "noauto" for root device
	[  +0.152742] systemd-fstab-generator[2049]: Ignoring "noauto" for root device
	[  +0.182263] systemd-fstab-generator[2062]: Ignoring "noauto" for root device
	[  +0.161014] systemd-fstab-generator[2073]: Ignoring "noauto" for root device
	[  +0.290003] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	[  +2.263264] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690] <==
	{"level":"info","ts":"2024-01-31T03:04:21.407556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became leader at term 2"}
	{"level":"info","ts":"2024-01-31T03:04:21.407594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fdd267ffc1b7c75a elected leader fdd267ffc1b7c75a at term 2"}
	{"level":"info","ts":"2024-01-31T03:04:21.409199Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:04:21.41038Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fdd267ffc1b7c75a","local-member-attributes":"{Name:pause-218490 ClientURLs:[https://192.168.39.138:2379]}","request-path":"/0/members/fdd267ffc1b7c75a/attributes","cluster-id":"63b27a6ce7f4c58a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:04:21.410496Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:04:21.411075Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"63b27a6ce7f4c58a","local-member-id":"fdd267ffc1b7c75a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:04:21.411222Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:04:21.411272Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:04:21.411327Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:04:21.412259Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.138:2379"}
	{"level":"info","ts":"2024-01-31T03:04:21.412932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:04:21.415433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:04:21.415503Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	WARNING: 2024/01/31 03:04:26 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-01-31T03:04:44.105373Z","caller":"traceutil/trace.go:171","msg":"trace[202895530] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"240.871782ms","start":"2024-01-31T03:04:43.864466Z","end":"2024-01-31T03:04:44.105338Z","steps":["trace[202895530] 'process raft request'  (duration: 240.332855ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-31T03:05:27.92534Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-31T03:05:27.925505Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-218490","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.138:2380"],"advertise-client-urls":["https://192.168.39.138:2379"]}
	{"level":"warn","ts":"2024-01-31T03:05:27.925633Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-31T03:05:27.925866Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-31T03:05:28.015453Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.138:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-31T03:05:28.015567Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.138:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-31T03:05:28.0173Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fdd267ffc1b7c75a","current-leader-member-id":"fdd267ffc1b7c75a"}
	{"level":"info","ts":"2024-01-31T03:05:28.020491Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2024-01-31T03:05:28.020676Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2024-01-31T03:05:28.020719Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-218490","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.138:2380"],"advertise-client-urls":["https://192.168.39.138:2379"]}
	
	
	==> etcd [93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5] <==
	{"level":"info","ts":"2024-01-31T03:05:41.46367Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-31T03:05:41.463677Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-31T03:05:41.463945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a switched to configuration voters=(18289795384869373786)"}
	{"level":"info","ts":"2024-01-31T03:05:41.464047Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"63b27a6ce7f4c58a","local-member-id":"fdd267ffc1b7c75a","added-peer-id":"fdd267ffc1b7c75a","added-peer-peer-urls":["https://192.168.39.138:2380"]}
	{"level":"info","ts":"2024-01-31T03:05:41.464257Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"63b27a6ce7f4c58a","local-member-id":"fdd267ffc1b7c75a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:05:41.46437Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:05:41.46631Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-31T03:05:41.466553Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fdd267ffc1b7c75a","initial-advertise-peer-urls":["https://192.168.39.138:2380"],"listen-peer-urls":["https://192.168.39.138:2380"],"advertise-client-urls":["https://192.168.39.138:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.138:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-31T03:05:41.466584Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-31T03:05:41.466637Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2024-01-31T03:05:41.466642Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2024-01-31T03:05:43.343368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-31T03:05:43.343491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-31T03:05:43.343548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a received MsgPreVoteResp from fdd267ffc1b7c75a at term 2"}
	{"level":"info","ts":"2024-01-31T03:05:43.343583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became candidate at term 3"}
	{"level":"info","ts":"2024-01-31T03:05:43.343607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a received MsgVoteResp from fdd267ffc1b7c75a at term 3"}
	{"level":"info","ts":"2024-01-31T03:05:43.343634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became leader at term 3"}
	{"level":"info","ts":"2024-01-31T03:05:43.343676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fdd267ffc1b7c75a elected leader fdd267ffc1b7c75a at term 3"}
	{"level":"info","ts":"2024-01-31T03:05:43.345383Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fdd267ffc1b7c75a","local-member-attributes":"{Name:pause-218490 ClientURLs:[https://192.168.39.138:2379]}","request-path":"/0/members/fdd267ffc1b7c75a/attributes","cluster-id":"63b27a6ce7f4c58a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:05:43.34543Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:05:43.345721Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:05:43.347027Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:05:43.347233Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:05:43.347272Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T03:05:43.347386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.138:2379"}
	
	
	==> kernel <==
	 03:06:00 up 2 min,  0 users,  load average: 0.58, 0.25, 0.09
	Linux pause-218490 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30] <==
	I0131 03:05:45.004598       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0131 03:05:45.063960       1 controller.go:134] Starting OpenAPI controller
	I0131 03:05:45.064298       1 controller.go:85] Starting OpenAPI V3 controller
	I0131 03:05:45.064601       1 naming_controller.go:291] Starting NamingConditionController
	I0131 03:05:45.064636       1 establishing_controller.go:76] Starting EstablishingController
	I0131 03:05:45.064854       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0131 03:05:45.065068       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0131 03:05:45.065092       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0131 03:05:45.207497       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0131 03:05:45.208712       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0131 03:05:45.208812       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0131 03:05:45.218411       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0131 03:05:45.218462       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0131 03:05:45.218492       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0131 03:05:45.218550       1 shared_informer.go:318] Caches are synced for configmaps
	I0131 03:05:45.223298       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0131 03:05:45.234313       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0131 03:05:45.234469       1 aggregator.go:166] initial CRD sync complete...
	I0131 03:05:45.234520       1 autoregister_controller.go:141] Starting autoregister controller
	I0131 03:05:45.234550       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0131 03:05:45.234579       1 cache.go:39] Caches are synced for autoregister controller
	E0131 03:05:45.275238       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0131 03:05:46.011987       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0131 03:05:57.454947       1 controller.go:624] quota admission added evaluator for: endpoints
	I0131 03:05:57.487082       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d] <==
	I0131 03:04:24.926370       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0131 03:04:24.977264       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0131 03:04:25.123515       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0131 03:04:25.140795       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.138]
	I0131 03:04:25.142308       1 controller.go:624] quota admission added evaluator for: endpoints
	I0131 03:04:25.149884       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0131 03:04:25.182417       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	E0131 03:04:26.571877       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0131 03:04:26.571939       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0131 03:04:26.571963       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.01µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0131 03:04:26.573280       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0131 03:04:26.573387       1 timeout.go:142] post-timeout activity - time-elapsed: 1.55604ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	I0131 03:04:26.643572       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0131 03:04:26.669144       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0131 03:04:26.687870       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0131 03:04:38.281052       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0131 03:04:38.836322       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0131 03:05:27.932402       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0131 03:05:27.932983       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933032       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933060       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933094       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933606       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933726       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933914       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333] <==
	I0131 03:05:57.494616       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0131 03:05:57.496067       1 shared_informer.go:318] Caches are synced for job
	I0131 03:05:57.496224       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0131 03:05:57.498364       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0131 03:05:57.498476       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0131 03:05:57.503315       1 shared_informer.go:318] Caches are synced for ephemeral
	I0131 03:05:57.504481       1 shared_informer.go:318] Caches are synced for PVC protection
	I0131 03:05:57.504588       1 shared_informer.go:318] Caches are synced for service account
	I0131 03:05:57.507831       1 shared_informer.go:318] Caches are synced for daemon sets
	I0131 03:05:57.507943       1 shared_informer.go:318] Caches are synced for PV protection
	I0131 03:05:57.517216       1 shared_informer.go:318] Caches are synced for HPA
	I0131 03:05:57.520521       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0131 03:05:57.523929       1 shared_informer.go:318] Caches are synced for GC
	I0131 03:05:57.528247       1 shared_informer.go:318] Caches are synced for TTL
	I0131 03:05:57.531573       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0131 03:05:57.531781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.23µs"
	I0131 03:05:57.555941       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0131 03:05:57.581559       1 shared_informer.go:318] Caches are synced for deployment
	I0131 03:05:57.628772       1 shared_informer.go:318] Caches are synced for cronjob
	I0131 03:05:57.638786       1 shared_informer.go:318] Caches are synced for disruption
	I0131 03:05:57.691664       1 shared_informer.go:318] Caches are synced for resource quota
	I0131 03:05:57.694720       1 shared_informer.go:318] Caches are synced for resource quota
	I0131 03:05:58.037058       1 shared_informer.go:318] Caches are synced for garbage collector
	I0131 03:05:58.056296       1 shared_informer.go:318] Caches are synced for garbage collector
	I0131 03:05:58.056388       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5] <==
	I0131 03:04:38.151023       1 shared_informer.go:318] Caches are synced for service account
	I0131 03:04:38.154600       1 range_allocator.go:380] "Set node PodCIDR" node="pause-218490" podCIDRs=["10.244.0.0/24"]
	I0131 03:04:38.163006       1 shared_informer.go:318] Caches are synced for resource quota
	I0131 03:04:38.176996       1 shared_informer.go:318] Caches are synced for crt configmap
	I0131 03:04:38.202423       1 shared_informer.go:318] Caches are synced for resource quota
	I0131 03:04:38.230256       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0131 03:04:38.288731       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0131 03:04:38.581678       1 shared_informer.go:318] Caches are synced for garbage collector
	I0131 03:04:38.632519       1 shared_informer.go:318] Caches are synced for garbage collector
	I0131 03:04:38.632647       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0131 03:04:38.867279       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rvz49"
	I0131 03:04:39.060510       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0131 03:04:39.089707       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-h9xfp"
	I0131 03:04:39.140631       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j6htm"
	I0131 03:04:39.232137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="942.50645ms"
	I0131 03:04:39.270277       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-h9xfp"
	I0131 03:04:39.365075       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="132.496231ms"
	I0131 03:04:39.433132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.440956ms"
	I0131 03:04:39.436917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.622248ms"
	I0131 03:04:40.953797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="171.804µs"
	I0131 03:04:40.977005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.234µs"
	I0131 03:04:40.982248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="225.318µs"
	I0131 03:04:41.966265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="2.157451ms"
	I0131 03:04:42.017802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.65637ms"
	I0131 03:04:42.023716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="369.933µs"
	
	
	==> kube-proxy [67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735] <==
	I0131 03:05:42.087850       1 server_others.go:69] "Using iptables proxy"
	I0131 03:05:45.202386       1 node.go:141] Successfully retrieved node IP: 192.168.39.138
	I0131 03:05:45.507248       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 03:05:45.507768       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:05:45.528566       1 server_others.go:152] "Using iptables Proxier"
	I0131 03:05:45.528667       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:05:45.528924       1 server.go:846] "Version info" version="v1.28.4"
	I0131 03:05:45.528937       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:05:45.534244       1 config.go:188] "Starting service config controller"
	I0131 03:05:45.534375       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:05:45.534502       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:05:45.535089       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:05:45.539697       1 config.go:315] "Starting node config controller"
	I0131 03:05:45.540768       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:05:45.635486       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0131 03:05:45.635627       1 shared_informer.go:318] Caches are synced for service config
	I0131 03:05:45.642036       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e] <==
	I0131 03:04:41.442689       1 server_others.go:69] "Using iptables proxy"
	I0131 03:04:41.464237       1 node.go:141] Successfully retrieved node IP: 192.168.39.138
	I0131 03:04:41.513790       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 03:04:41.513859       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:04:41.516773       1 server_others.go:152] "Using iptables Proxier"
	I0131 03:04:41.517647       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:04:41.518340       1 server.go:846] "Version info" version="v1.28.4"
	I0131 03:04:41.518387       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:04:41.521666       1 config.go:188] "Starting service config controller"
	I0131 03:04:41.522290       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:04:41.522369       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:04:41.522379       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:04:41.526988       1 config.go:315] "Starting node config controller"
	I0131 03:04:41.527110       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:04:41.623430       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0131 03:04:41.623551       1 shared_informer.go:318] Caches are synced for service config
	I0131 03:04:41.628313       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a] <==
	I0131 03:05:42.158260       1 serving.go:348] Generated self-signed cert in-memory
	W0131 03:05:45.180864       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0131 03:05:45.180996       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0131 03:05:45.181014       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0131 03:05:45.181109       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0131 03:05:45.229267       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0131 03:05:45.229363       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:05:45.238479       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0131 03:05:45.239728       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0131 03:05:45.239827       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0131 03:05:45.239884       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0131 03:05:45.340241       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6] <==
	W0131 03:04:24.124307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:04:24.124406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0131 03:04:24.388651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 03:04:24.388743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0131 03:04:24.412359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:04:24.412447       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0131 03:04:24.435597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:04:24.435687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0131 03:04:24.457003       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:04:24.457091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0131 03:04:24.505961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0131 03:04:24.506125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0131 03:04:24.571295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:04:24.571356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:04:24.603543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:04:24.603604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 03:04:24.615991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0131 03:04:24.616052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0131 03:04:24.729975       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 03:04:24.730108       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0131 03:04:27.747250       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0131 03:05:27.927268       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0131 03:05:27.927592       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0131 03:05:27.927780       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0131 03:05:27.933835       1 run.go:74] "command failed" err="finished without leader elect"
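The exited scheduler instance (attempt 0) logs the transient RBAC denials that are normal before the bootstrap RBAC rules exist, then shuts down at 03:05:27 when the control plane is restarted. Its log survives the restart and could be re-read from the node via the container ID shown in the status table, assuming crictl access (illustrative):

	out/minikube-linux-amd64 -p pause-218490 ssh "sudo crictl logs f9309037c8bf5"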
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:03:48 UTC, ends at Wed 2024-01-31 03:06:00 UTC. --
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.176847    1269 status_manager.go:853] "Failed to get status for pod" podUID="e6273bd781ddb25868058b06d2dc5b10" pod="kube-system/etcd-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.196658    1269 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.197512    1269 status_manager.go:853] "Failed to get status for pod" podUID="16733bb4cb57d3074ea85378bd0c43b7" pod="kube-system/kube-apiserver-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.197750    1269 status_manager.go:853] "Failed to get status for pod" podUID="47a5ac2f6510a9d901d1e134669c82d9" pod="kube-system/kube-controller-manager-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.197965    1269 status_manager.go:853] "Failed to get status for pod" podUID="2d5e681347ce1546f8e16f3648243b28" pod="kube-system/kube-scheduler-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.198321    1269 status_manager.go:853] "Failed to get status for pod" podUID="568ad034-cb61-44ba-9fd9-892cfd5b9fc6" pod="kube-system/kube-proxy-rvz49" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rvz49\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.198599    1269 status_manager.go:853] "Failed to get status for pod" podUID="f935de64-0599-41ac-9a8f-fa1b1fd507a2" pod="kube-system/coredns-5dd5756b68-j6htm" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-j6htm\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.198895    1269 status_manager.go:853] "Failed to get status for pod" podUID="e6273bd781ddb25868058b06d2dc5b10" pod="kube-system/etcd-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.285857    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.286114    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.286459    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.286637    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.286884    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: I0131 03:05:38.286919    1269 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.287076    1269 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused" interval="200ms"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.488009    1269 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused" interval="400ms"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532085    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532414    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532600    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532738    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532893    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532904    1269 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.889860    1269 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused" interval="800ms"
	Jan 31 03:05:39 pause-218490 kubelet[1269]: E0131 03:05:39.691510    1269 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused" interval="1.6s"
	Jan 31 03:05:45 pause-218490 kubelet[1269]: E0131 03:05:45.139694    1269 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-218490 -n pause-218490
helpers_test.go:261: (dbg) Run:  kubectl --context pause-218490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-218490 -n pause-218490
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-218490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-218490 logs -n 25: (1.570467555s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-278852          | kubernetes-upgrade-278852 | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:03 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-609081 stop           | minikube                  | jenkins | v1.26.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:02 UTC |
	| start   | -p stopped-upgrade-609081             | stopped-upgrade-609081    | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:03 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-317821 sudo           | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-317821                | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:02 UTC |
	| start   | -p NoKubernetes-317821                | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:02 UTC | 31 Jan 24 03:03 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-331640             | running-upgrade-331640    | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:03 UTC |
	| start   | -p pause-218490 --memory=2048         | pause-218490              | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:04 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-278852          | kubernetes-upgrade-278852 | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-278852          | kubernetes-upgrade-278852 | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:04 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-609081             | stopped-upgrade-609081    | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:03 UTC |
	| ssh     | -p NoKubernetes-317821 sudo           | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-317821                | NoKubernetes-317821       | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:03 UTC |
	| start   | -p cert-expiration-897667             | cert-expiration-897667    | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:04 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-097545          | force-systemd-flag-097545 | jenkins | v1.32.0 | 31 Jan 24 03:03 UTC | 31 Jan 24 03:05 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-278852          | kubernetes-upgrade-278852 | jenkins | v1.32.0 | 31 Jan 24 03:04 UTC | 31 Jan 24 03:04 UTC |
	| start   | -p cert-options-430741                | cert-options-430741       | jenkins | v1.32.0 | 31 Jan 24 03:04 UTC | 31 Jan 24 03:05 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-218490                       | pause-218490              | jenkins | v1.32.0 | 31 Jan 24 03:04 UTC | 31 Jan 24 03:05 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-097545 ssh cat     | force-systemd-flag-097545 | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-097545          | force-systemd-flag-097545 | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	| start   | -p auto-390748 --memory=3072          | auto-390748               | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-430741 ssh               | cert-options-430741       | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-430741 -- sudo        | cert-options-430741       | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-430741                | cert-options-430741       | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC | 31 Jan 24 03:05 UTC |
	| start   | -p kindnet-390748                     | kindnet-390748            | jenkins | v1.32.0 | 31 Jan 24 03:05 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:05:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:05:50.979747 1450691 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:05:50.979901 1450691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:05:50.979912 1450691 out.go:309] Setting ErrFile to fd 2...
	I0131 03:05:50.979917 1450691 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:05:50.980136 1450691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:05:50.980779 1450691 out.go:303] Setting JSON to false
	I0131 03:05:50.981856 1450691 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28094,"bootTime":1706642257,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:05:50.981927 1450691 start.go:138] virtualization: kvm guest
	I0131 03:05:50.984323 1450691 out.go:177] * [kindnet-390748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:05:50.985669 1450691 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:05:50.985737 1450691 notify.go:220] Checking for updates...
	I0131 03:05:50.987061 1450691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:05:50.988560 1450691 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:05:50.989935 1450691 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:05:50.991271 1450691 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:05:50.992574 1450691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:05:50.994382 1450691 config.go:182] Loaded profile config "auto-390748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:05:50.994509 1450691 config.go:182] Loaded profile config "cert-expiration-897667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:05:50.994714 1450691 config.go:182] Loaded profile config "pause-218490": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:05:50.994820 1450691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:05:51.034344 1450691 out.go:177] * Using the kvm2 driver based on user configuration
	I0131 03:05:51.035789 1450691 start.go:298] selected driver: kvm2
	I0131 03:05:51.035810 1450691 start.go:902] validating driver "kvm2" against <nil>
	I0131 03:05:51.035826 1450691 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:05:51.036648 1450691 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:05:51.036760 1450691 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:05:51.052976 1450691 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:05:51.053074 1450691 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 03:05:51.053285 1450691 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:05:51.053336 1450691 cni.go:84] Creating CNI manager for "kindnet"
	I0131 03:05:51.053351 1450691 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0131 03:05:51.053363 1450691 start_flags.go:321] config:
	{Name:kindnet-390748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-390748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:05:51.053572 1450691 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:05:51.055295 1450691 out.go:177] * Starting control plane node kindnet-390748 in cluster kindnet-390748
	I0131 03:05:50.150118 1449733 pod_ready.go:92] pod "coredns-5dd5756b68-j6htm" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:50.150143 1449733 pod_ready.go:81] duration metric: took 3.507046186s waiting for pod "coredns-5dd5756b68-j6htm" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:50.150151 1449733 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:52.157911 1449733 pod_ready.go:102] pod "etcd-pause-218490" in "kube-system" namespace has status "Ready":"False"
	I0131 03:05:50.784166 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:50.784685 1450261 main.go:141] libmachine: (auto-390748) DBG | unable to find current IP address of domain auto-390748 in network mk-auto-390748
	I0131 03:05:50.784743 1450261 main.go:141] libmachine: (auto-390748) DBG | I0131 03:05:50.784653 1450312 retry.go:31] will retry after 4.449193384s: waiting for machine to come up
	I0131 03:05:55.236439 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:55.237005 1450261 main.go:141] libmachine: (auto-390748) DBG | unable to find current IP address of domain auto-390748 in network mk-auto-390748
	I0131 03:05:55.237028 1450261 main.go:141] libmachine: (auto-390748) DBG | I0131 03:05:55.236951 1450312 retry.go:31] will retry after 3.742413695s: waiting for machine to come up
	I0131 03:05:51.056782 1450691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:05:51.056834 1450691 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:05:51.056850 1450691 cache.go:56] Caching tarball of preloaded images
	I0131 03:05:51.056945 1450691 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:05:51.056957 1450691 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:05:51.057071 1450691 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/config.json ...
	I0131 03:05:51.057094 1450691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/config.json: {Name:mk175207567b19a49c1bbd1dc9edf0a11435b550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:05:51.057280 1450691 start.go:365] acquiring machines lock for kindnet-390748: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:05:54.658075 1449733 pod_ready.go:102] pod "etcd-pause-218490" in "kube-system" namespace has status "Ready":"False"
	I0131 03:05:56.659168 1449733 pod_ready.go:102] pod "etcd-pause-218490" in "kube-system" namespace has status "Ready":"False"
	I0131 03:05:57.656336 1449733 pod_ready.go:92] pod "etcd-pause-218490" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:57.656359 1449733 pod_ready.go:81] duration metric: took 7.506201252s waiting for pod "etcd-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.656370 1449733 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.661125 1449733 pod_ready.go:92] pod "kube-apiserver-pause-218490" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:57.661148 1449733 pod_ready.go:81] duration metric: took 4.766308ms waiting for pod "kube-apiserver-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.661157 1449733 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.665533 1449733 pod_ready.go:92] pod "kube-controller-manager-pause-218490" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:57.665552 1449733 pod_ready.go:81] duration metric: took 4.389518ms waiting for pod "kube-controller-manager-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.665560 1449733 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rvz49" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.670446 1449733 pod_ready.go:92] pod "kube-proxy-rvz49" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:57.670462 1449733 pod_ready.go:81] duration metric: took 4.897423ms waiting for pod "kube-proxy-rvz49" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:57.670470 1449733 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:58.177515 1449733 pod_ready.go:92] pod "kube-scheduler-pause-218490" in "kube-system" namespace has status "Ready":"True"
	I0131 03:05:58.177551 1449733 pod_ready.go:81] duration metric: took 507.073962ms waiting for pod "kube-scheduler-pause-218490" in "kube-system" namespace to be "Ready" ...
	I0131 03:05:58.177577 1449733 pod_ready.go:38] duration metric: took 11.540488745s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:05:58.177600 1449733 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:05:58.177671 1449733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:05:58.190331 1449733 api_server.go:72] duration metric: took 11.672357465s to wait for apiserver process to appear ...
	I0131 03:05:58.190361 1449733 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:05:58.190388 1449733 api_server.go:253] Checking apiserver healthz at https://192.168.39.138:8443/healthz ...
	I0131 03:05:58.197108 1449733 api_server.go:279] https://192.168.39.138:8443/healthz returned 200:
	ok
	I0131 03:05:58.198687 1449733 api_server.go:141] control plane version: v1.28.4
	I0131 03:05:58.198708 1449733 api_server.go:131] duration metric: took 8.33987ms to wait for apiserver health ...
	I0131 03:05:58.198717 1449733 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:05:58.257526 1449733 system_pods.go:59] 6 kube-system pods found
	I0131 03:05:58.257555 1449733 system_pods.go:61] "coredns-5dd5756b68-j6htm" [f935de64-0599-41ac-9a8f-fa1b1fd507a2] Running
	I0131 03:05:58.257560 1449733 system_pods.go:61] "etcd-pause-218490" [5de7f78d-f2c2-4b1c-b12e-c9bd5a52ca47] Running
	I0131 03:05:58.257564 1449733 system_pods.go:61] "kube-apiserver-pause-218490" [dfd11e28-7501-4e4f-a1f2-92a0f8925ccc] Running
	I0131 03:05:58.257568 1449733 system_pods.go:61] "kube-controller-manager-pause-218490" [c8905a27-ad81-431b-826f-31d7b68dbdff] Running
	I0131 03:05:58.257572 1449733 system_pods.go:61] "kube-proxy-rvz49" [568ad034-cb61-44ba-9fd9-892cfd5b9fc6] Running
	I0131 03:05:58.257576 1449733 system_pods.go:61] "kube-scheduler-pause-218490" [1a0103e8-c834-4928-bff0-ed099e88bead] Running
	I0131 03:05:58.257582 1449733 system_pods.go:74] duration metric: took 58.8583ms to wait for pod list to return data ...
	I0131 03:05:58.257589 1449733 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:05:58.454976 1449733 default_sa.go:45] found service account: "default"
	I0131 03:05:58.455014 1449733 default_sa.go:55] duration metric: took 197.416785ms for default service account to be created ...
	I0131 03:05:58.455026 1449733 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:05:58.657213 1449733 system_pods.go:86] 6 kube-system pods found
	I0131 03:05:58.657244 1449733 system_pods.go:89] "coredns-5dd5756b68-j6htm" [f935de64-0599-41ac-9a8f-fa1b1fd507a2] Running
	I0131 03:05:58.657250 1449733 system_pods.go:89] "etcd-pause-218490" [5de7f78d-f2c2-4b1c-b12e-c9bd5a52ca47] Running
	I0131 03:05:58.657254 1449733 system_pods.go:89] "kube-apiserver-pause-218490" [dfd11e28-7501-4e4f-a1f2-92a0f8925ccc] Running
	I0131 03:05:58.657259 1449733 system_pods.go:89] "kube-controller-manager-pause-218490" [c8905a27-ad81-431b-826f-31d7b68dbdff] Running
	I0131 03:05:58.657264 1449733 system_pods.go:89] "kube-proxy-rvz49" [568ad034-cb61-44ba-9fd9-892cfd5b9fc6] Running
	I0131 03:05:58.657268 1449733 system_pods.go:89] "kube-scheduler-pause-218490" [1a0103e8-c834-4928-bff0-ed099e88bead] Running
	I0131 03:05:58.657274 1449733 system_pods.go:126] duration metric: took 202.24259ms to wait for k8s-apps to be running ...
	I0131 03:05:58.657280 1449733 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:05:58.657323 1449733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:05:58.671249 1449733 system_svc.go:56] duration metric: took 13.957513ms WaitForService to wait for kubelet.
	I0131 03:05:58.671278 1449733 kubeadm.go:581] duration metric: took 12.1533121s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:05:58.671299 1449733 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:05:58.855702 1449733 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:05:58.855734 1449733 node_conditions.go:123] node cpu capacity is 2
	I0131 03:05:58.855744 1449733 node_conditions.go:105] duration metric: took 184.440764ms to run NodePressure ...
	I0131 03:05:58.855757 1449733 start.go:228] waiting for startup goroutines ...
	I0131 03:05:58.855762 1449733 start.go:233] waiting for cluster config update ...
	I0131 03:05:58.855768 1449733 start.go:242] writing updated cluster config ...
	I0131 03:05:58.856089 1449733 ssh_runner.go:195] Run: rm -f paused
	I0131 03:05:58.907438 1449733 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:05:58.909481 1449733 out.go:177] * Done! kubectl is now configured to use "pause-218490" cluster and "default" namespace by default
	I0131 03:05:58.981016 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:58.981534 1450261 main.go:141] libmachine: (auto-390748) Found IP for machine: 192.168.50.117
	I0131 03:05:58.981559 1450261 main.go:141] libmachine: (auto-390748) Reserving static IP address...
	I0131 03:05:58.981574 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has current primary IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:58.982034 1450261 main.go:141] libmachine: (auto-390748) DBG | unable to find host DHCP lease matching {name: "auto-390748", mac: "52:54:00:ef:9b:9f", ip: "192.168.50.117"} in network mk-auto-390748
	I0131 03:05:59.074262 1450261 main.go:141] libmachine: (auto-390748) DBG | Getting to WaitForSSH function...
	I0131 03:05:59.074296 1450261 main.go:141] libmachine: (auto-390748) Reserved static IP address: 192.168.50.117
	I0131 03:05:59.074312 1450261 main.go:141] libmachine: (auto-390748) Waiting for SSH to be available...
	I0131 03:05:59.077593 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.078078 1450261 main.go:141] libmachine: (auto-390748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:9b:9f", ip: ""} in network mk-auto-390748: {Iface:virbr2 ExpiryTime:2024-01-31 04:05:51 +0000 UTC Type:0 Mac:52:54:00:ef:9b:9f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ef:9b:9f}
	I0131 03:05:59.078112 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.078281 1450261 main.go:141] libmachine: (auto-390748) DBG | Using SSH client type: external
	I0131 03:05:59.078307 1450261 main.go:141] libmachine: (auto-390748) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/auto-390748/id_rsa (-rw-------)
	I0131 03:05:59.078338 1450261 main.go:141] libmachine: (auto-390748) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/auto-390748/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:05:59.078354 1450261 main.go:141] libmachine: (auto-390748) DBG | About to run SSH command:
	I0131 03:05:59.078371 1450261 main.go:141] libmachine: (auto-390748) DBG | exit 0
	I0131 03:05:59.182317 1450261 main.go:141] libmachine: (auto-390748) DBG | SSH cmd err, output: <nil>: 
	I0131 03:05:59.182610 1450261 main.go:141] libmachine: (auto-390748) KVM machine creation complete!
	I0131 03:05:59.182992 1450261 main.go:141] libmachine: (auto-390748) Calling .GetConfigRaw
	I0131 03:05:59.183597 1450261 main.go:141] libmachine: (auto-390748) Calling .DriverName
	I0131 03:05:59.183811 1450261 main.go:141] libmachine: (auto-390748) Calling .DriverName
	I0131 03:05:59.184018 1450261 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0131 03:05:59.184038 1450261 main.go:141] libmachine: (auto-390748) Calling .GetState
	I0131 03:05:59.185474 1450261 main.go:141] libmachine: Detecting operating system of created instance...
	I0131 03:05:59.185493 1450261 main.go:141] libmachine: Waiting for SSH to be available...
	I0131 03:05:59.185508 1450261 main.go:141] libmachine: Getting to WaitForSSH function...
	I0131 03:05:59.185522 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHHostname
	I0131 03:05:59.188219 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.188683 1450261 main.go:141] libmachine: (auto-390748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:9b:9f", ip: ""} in network mk-auto-390748: {Iface:virbr2 ExpiryTime:2024-01-31 04:05:51 +0000 UTC Type:0 Mac:52:54:00:ef:9b:9f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:auto-390748 Clientid:01:52:54:00:ef:9b:9f}
	I0131 03:05:59.188717 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.188906 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHPort
	I0131 03:05:59.189113 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:05:59.189295 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:05:59.189477 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHUsername
	I0131 03:05:59.189674 1450261 main.go:141] libmachine: Using SSH client type: native
	I0131 03:05:59.190031 1450261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0131 03:05:59.190044 1450261 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0131 03:05:59.322399 1450261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:05:59.322431 1450261 main.go:141] libmachine: Detecting the provisioner...
	I0131 03:05:59.322442 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHHostname
	I0131 03:05:59.325945 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.326610 1450261 main.go:141] libmachine: (auto-390748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:9b:9f", ip: ""} in network mk-auto-390748: {Iface:virbr2 ExpiryTime:2024-01-31 04:05:51 +0000 UTC Type:0 Mac:52:54:00:ef:9b:9f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:auto-390748 Clientid:01:52:54:00:ef:9b:9f}
	I0131 03:05:59.326643 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.326854 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHPort
	I0131 03:05:59.327138 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:05:59.327312 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:05:59.327465 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHUsername
	I0131 03:05:59.327658 1450261 main.go:141] libmachine: Using SSH client type: native
	I0131 03:05:59.328046 1450261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0131 03:05:59.328068 1450261 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0131 03:05:59.455474 1450261 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0131 03:05:59.455545 1450261 main.go:141] libmachine: found compatible host: buildroot
	I0131 03:05:59.455566 1450261 main.go:141] libmachine: Provisioning with buildroot...
	I0131 03:05:59.455582 1450261 main.go:141] libmachine: (auto-390748) Calling .GetMachineName
	I0131 03:05:59.455895 1450261 buildroot.go:166] provisioning hostname "auto-390748"
	I0131 03:05:59.455925 1450261 main.go:141] libmachine: (auto-390748) Calling .GetMachineName
	I0131 03:05:59.456124 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHHostname
	I0131 03:05:59.458653 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.458945 1450261 main.go:141] libmachine: (auto-390748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:9b:9f", ip: ""} in network mk-auto-390748: {Iface:virbr2 ExpiryTime:2024-01-31 04:05:51 +0000 UTC Type:0 Mac:52:54:00:ef:9b:9f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:auto-390748 Clientid:01:52:54:00:ef:9b:9f}
	I0131 03:05:59.458974 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.459160 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHPort
	I0131 03:05:59.459359 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:05:59.459557 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:05:59.459838 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHUsername
	I0131 03:05:59.460012 1450261 main.go:141] libmachine: Using SSH client type: native
	I0131 03:05:59.460324 1450261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0131 03:05:59.460337 1450261 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-390748 && echo "auto-390748" | sudo tee /etc/hostname
	I0131 03:05:59.604517 1450261 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-390748
	
	I0131 03:05:59.604552 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHHostname
	I0131 03:05:59.608084 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.608464 1450261 main.go:141] libmachine: (auto-390748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:9b:9f", ip: ""} in network mk-auto-390748: {Iface:virbr2 ExpiryTime:2024-01-31 04:05:51 +0000 UTC Type:0 Mac:52:54:00:ef:9b:9f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:auto-390748 Clientid:01:52:54:00:ef:9b:9f}
	I0131 03:05:59.608518 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.608769 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHPort
	I0131 03:05:59.608986 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:05:59.609177 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:05:59.609354 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHUsername
	I0131 03:05:59.609572 1450261 main.go:141] libmachine: Using SSH client type: native
	I0131 03:05:59.610039 1450261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0131 03:05:59.610068 1450261 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-390748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-390748/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-390748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:05:59.748435 1450261 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:05:59.748469 1450261 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:05:59.748528 1450261 buildroot.go:174] setting up certificates
	I0131 03:05:59.748547 1450261 provision.go:83] configureAuth start
	I0131 03:05:59.748567 1450261 main.go:141] libmachine: (auto-390748) Calling .GetMachineName
	I0131 03:05:59.748883 1450261 main.go:141] libmachine: (auto-390748) Calling .GetIP
	I0131 03:05:59.752091 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.752548 1450261 main.go:141] libmachine: (auto-390748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:9b:9f", ip: ""} in network mk-auto-390748: {Iface:virbr2 ExpiryTime:2024-01-31 04:05:51 +0000 UTC Type:0 Mac:52:54:00:ef:9b:9f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:auto-390748 Clientid:01:52:54:00:ef:9b:9f}
	I0131 03:05:59.752583 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.752753 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHHostname
	I0131 03:05:59.755164 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.755590 1450261 main.go:141] libmachine: (auto-390748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:9b:9f", ip: ""} in network mk-auto-390748: {Iface:virbr2 ExpiryTime:2024-01-31 04:05:51 +0000 UTC Type:0 Mac:52:54:00:ef:9b:9f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:auto-390748 Clientid:01:52:54:00:ef:9b:9f}
	I0131 03:05:59.755612 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:05:59.755789 1450261 provision.go:138] copyHostCerts
	I0131 03:05:59.755870 1450261 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:05:59.755886 1450261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:05:59.755966 1450261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:05:59.756087 1450261 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:05:59.756101 1450261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:05:59.756135 1450261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:05:59.756220 1450261 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:05:59.756232 1450261 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:05:59.756262 1450261 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:05:59.756324 1450261 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.auto-390748 san=[192.168.50.117 192.168.50.117 localhost 127.0.0.1 minikube auto-390748]
	I0131 03:06:00.189116 1450261 provision.go:172] copyRemoteCerts
	I0131 03:06:00.189190 1450261 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:06:00.189218 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHHostname
	I0131 03:06:00.191947 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:06:00.192403 1450261 main.go:141] libmachine: (auto-390748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:9b:9f", ip: ""} in network mk-auto-390748: {Iface:virbr2 ExpiryTime:2024-01-31 04:05:51 +0000 UTC Type:0 Mac:52:54:00:ef:9b:9f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:auto-390748 Clientid:01:52:54:00:ef:9b:9f}
	I0131 03:06:00.192442 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:06:00.192641 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHPort
	I0131 03:06:00.192908 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:06:00.193094 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHUsername
	I0131 03:06:00.193264 1450261 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/auto-390748/id_rsa Username:docker}
	I0131 03:06:00.287189 1450261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:06:00.314202 1450261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0131 03:06:00.339407 1450261 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:06:00.366540 1450261 provision.go:86] duration metric: configureAuth took 617.974055ms
	I0131 03:06:00.366590 1450261 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:06:00.366792 1450261 config.go:182] Loaded profile config "auto-390748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:06:00.366896 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHHostname
	I0131 03:06:00.369881 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:06:00.370269 1450261 main.go:141] libmachine: (auto-390748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:9b:9f", ip: ""} in network mk-auto-390748: {Iface:virbr2 ExpiryTime:2024-01-31 04:05:51 +0000 UTC Type:0 Mac:52:54:00:ef:9b:9f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:auto-390748 Clientid:01:52:54:00:ef:9b:9f}
	I0131 03:06:00.370299 1450261 main.go:141] libmachine: (auto-390748) DBG | domain auto-390748 has defined IP address 192.168.50.117 and MAC address 52:54:00:ef:9b:9f in network mk-auto-390748
	I0131 03:06:00.370454 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHPort
	I0131 03:06:00.370690 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:06:00.370870 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHKeyPath
	I0131 03:06:00.371039 1450261 main.go:141] libmachine: (auto-390748) Calling .GetSSHUsername
	I0131 03:06:00.371263 1450261 main.go:141] libmachine: Using SSH client type: native
	I0131 03:06:00.371734 1450261 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0131 03:06:00.371767 1450261 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:06:01.020511 1450691 start.go:369] acquired machines lock for "kindnet-390748" in 9.963191626s
	I0131 03:06:01.020587 1450691 start.go:93] Provisioning new machine with config: &{Name:kindnet-390748 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:kindnet-390748 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:06:01.020719 1450691 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:03:48 UTC, ends at Wed 2024-01-31 03:06:01 UTC. --
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.803346234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706670361803330959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=ffc02fda-c758-4b3e-8566-0c38dbef2b46 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.804053482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ae030562-a285-40b6-b6bc-860ea0cbaf31 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.804116553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ae030562-a285-40b6-b6bc-860ea0cbaf31 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.804395092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735,PodSandboxId:b16d323746f60bd44b13db5e4e9297cfab44f69f96790c70abf387c4998e770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706670341812093054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f,PodSandboxId:24cca48a7ef5aa8634956d49a952e61c225cb3cb6f3f1e16eb65e730d2148b0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706670340559758178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a,PodSandboxId:3bf3e9e4a8fa3cde407de55ce1d06cc050abe45c9f8e92f521f7d4724b400fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706670339284922712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681
347ce1546f8e16f3648243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5,PodSandboxId:49ca164101587a9cecb84e0ad205bbc71f2bc72224d62ef4b9be9e3c2e4f9447,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706670339015746559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9ff0db61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30,PodSandboxId:fb8a3b325d407705c1d151e3641032ebbd1db29c0af5c0d7257b0929dba228f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706670338592104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.
container.hash: d1fd4562,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333,PodSandboxId:5561a2faf9f907e265254f65bc10ebf2380c46eeffe82ea957ca1c444ac73d9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706670338530881529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e,PodSandboxId:6c2132152ae0552e4fdccaab1b243521aea681798758a216812495787983c2a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1706670280993676559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3,PodSandboxId:259fa81edfc6180f63d83fc36eb8718800b7030890715268b71757c4b4d8432f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1706670281053588852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6,PodSandboxId:e85b9c94b06e95b7c886fa8109948a71504a779299fdafb620baef2682b9af42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1706670259372589335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681347ce1546f8e16f3648
243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690,PodSandboxId:4b6bd2fdd95b6df776b686da76d2bfd9c7dc4f85870c6e385f582db8a8eb627e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1706670259113646224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]string{io.kubernetes.container.hash: 9ff0db61,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d,PodSandboxId:c66f125826c13ec9cecace45286f0595424a296765eb918492dbf80dddbbe7ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1706670258984862106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1fd4562,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5,PodSandboxId:97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1706670258892586625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ae030562-a285-40b6-b6bc-860ea0cbaf31 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.856038658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fd17cfb4-116b-4cd8-ae97-0e96eb9e2ec8 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.856095890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fd17cfb4-116b-4cd8-ae97-0e96eb9e2ec8 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.857725269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=13255ee4-9e3e-4fcd-944b-9ad67616c1b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.858077764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706670361858064626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=13255ee4-9e3e-4fcd-944b-9ad67616c1b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.859721779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8d8042a2-1e80-42da-88c4-07eaecdc40c8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.859790481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8d8042a2-1e80-42da-88c4-07eaecdc40c8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.860031333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735,PodSandboxId:b16d323746f60bd44b13db5e4e9297cfab44f69f96790c70abf387c4998e770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706670341812093054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f,PodSandboxId:24cca48a7ef5aa8634956d49a952e61c225cb3cb6f3f1e16eb65e730d2148b0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706670340559758178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a,PodSandboxId:3bf3e9e4a8fa3cde407de55ce1d06cc050abe45c9f8e92f521f7d4724b400fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706670339284922712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681
347ce1546f8e16f3648243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5,PodSandboxId:49ca164101587a9cecb84e0ad205bbc71f2bc72224d62ef4b9be9e3c2e4f9447,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706670339015746559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9ff0db61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30,PodSandboxId:fb8a3b325d407705c1d151e3641032ebbd1db29c0af5c0d7257b0929dba228f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706670338592104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.
container.hash: d1fd4562,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333,PodSandboxId:5561a2faf9f907e265254f65bc10ebf2380c46eeffe82ea957ca1c444ac73d9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706670338530881529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e,PodSandboxId:6c2132152ae0552e4fdccaab1b243521aea681798758a216812495787983c2a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1706670280993676559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3,PodSandboxId:259fa81edfc6180f63d83fc36eb8718800b7030890715268b71757c4b4d8432f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1706670281053588852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6,PodSandboxId:e85b9c94b06e95b7c886fa8109948a71504a779299fdafb620baef2682b9af42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1706670259372589335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681347ce1546f8e16f3648
243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690,PodSandboxId:4b6bd2fdd95b6df776b686da76d2bfd9c7dc4f85870c6e385f582db8a8eb627e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1706670259113646224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]string{io.kubernetes.container.hash: 9ff0db61,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d,PodSandboxId:c66f125826c13ec9cecace45286f0595424a296765eb918492dbf80dddbbe7ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1706670258984862106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1fd4562,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5,PodSandboxId:97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1706670258892586625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8d8042a2-1e80-42da-88c4-07eaecdc40c8 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.914070876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=72934d1d-c4f7-44e5-84c1-5ccc062ab9d9 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.914251560Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=72934d1d-c4f7-44e5-84c1-5ccc062ab9d9 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.915961218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ff98c7af-06e6-4cf3-9520-fed1140c0b44 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.916393370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706670361916378548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=ff98c7af-06e6-4cf3-9520-fed1140c0b44 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.917247328Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cd41aa4c-610a-4eaf-9160-f4bb0f81aa43 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.917314013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cd41aa4c-610a-4eaf-9160-f4bb0f81aa43 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.917546613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735,PodSandboxId:b16d323746f60bd44b13db5e4e9297cfab44f69f96790c70abf387c4998e770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706670341812093054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f,PodSandboxId:24cca48a7ef5aa8634956d49a952e61c225cb3cb6f3f1e16eb65e730d2148b0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706670340559758178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a,PodSandboxId:3bf3e9e4a8fa3cde407de55ce1d06cc050abe45c9f8e92f521f7d4724b400fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706670339284922712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681
347ce1546f8e16f3648243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5,PodSandboxId:49ca164101587a9cecb84e0ad205bbc71f2bc72224d62ef4b9be9e3c2e4f9447,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706670339015746559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9ff0db61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30,PodSandboxId:fb8a3b325d407705c1d151e3641032ebbd1db29c0af5c0d7257b0929dba228f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706670338592104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.
container.hash: d1fd4562,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333,PodSandboxId:5561a2faf9f907e265254f65bc10ebf2380c46eeffe82ea957ca1c444ac73d9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706670338530881529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e,PodSandboxId:6c2132152ae0552e4fdccaab1b243521aea681798758a216812495787983c2a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1706670280993676559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3,PodSandboxId:259fa81edfc6180f63d83fc36eb8718800b7030890715268b71757c4b4d8432f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1706670281053588852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6,PodSandboxId:e85b9c94b06e95b7c886fa8109948a71504a779299fdafb620baef2682b9af42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1706670259372589335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681347ce1546f8e16f3648
243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690,PodSandboxId:4b6bd2fdd95b6df776b686da76d2bfd9c7dc4f85870c6e385f582db8a8eb627e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1706670259113646224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]string{io.kubernetes.container.hash: 9ff0db61,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d,PodSandboxId:c66f125826c13ec9cecace45286f0595424a296765eb918492dbf80dddbbe7ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1706670258984862106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1fd4562,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5,PodSandboxId:97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1706670258892586625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cd41aa4c-610a-4eaf-9160-f4bb0f81aa43 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.962924565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cc5fd7fd-1050-4d2e-997d-4c89fa100414 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.962985503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cc5fd7fd-1050-4d2e-997d-4c89fa100414 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.964364667Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fbaa9d4a-3af3-4e54-823a-84180f3b6d95 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.964845739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706670361964829513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=fbaa9d4a-3af3-4e54-823a-84180f3b6d95 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.965342697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ed5d8a2d-299a-4577-a805-609dcdfaddae name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.965385945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ed5d8a2d-299a-4577-a805-609dcdfaddae name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:06:01 pause-218490 crio[2113]: time="2024-01-31 03:06:01.965676122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735,PodSandboxId:b16d323746f60bd44b13db5e4e9297cfab44f69f96790c70abf387c4998e770e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706670341812093054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f,PodSandboxId:24cca48a7ef5aa8634956d49a952e61c225cb3cb6f3f1e16eb65e730d2148b0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706670340559758178,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a,PodSandboxId:3bf3e9e4a8fa3cde407de55ce1d06cc050abe45c9f8e92f521f7d4724b400fb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706670339284922712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681
347ce1546f8e16f3648243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5,PodSandboxId:49ca164101587a9cecb84e0ad205bbc71f2bc72224d62ef4b9be9e3c2e4f9447,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706670339015746559,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 9ff0db61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30,PodSandboxId:fb8a3b325d407705c1d151e3641032ebbd1db29c0af5c0d7257b0929dba228f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706670338592104308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.
container.hash: d1fd4562,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333,PodSandboxId:5561a2faf9f907e265254f65bc10ebf2380c46eeffe82ea957ca1c444ac73d9e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706670338530881529,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e,PodSandboxId:6c2132152ae0552e4fdccaab1b243521aea681798758a216812495787983c2a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1706670280993676559,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvz49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568ad034-cb61-44ba-9fd9-892cfd5b9fc6,},Annotations:map[string]string{io.kubernetes.container.hash: c250bf41,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3,PodSandboxId:259fa81edfc6180f63d83fc36eb8718800b7030890715268b71757c4b4d8432f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1706670281053588852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j6htm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f935de64-0599-41ac-9a8f-fa1b1fd507a2,},Annotations:map[string]string{io.kubernetes.container.hash: 48aa87c8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6,PodSandboxId:e85b9c94b06e95b7c886fa8109948a71504a779299fdafb620baef2682b9af42,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1706670259372589335,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d5e681347ce1546f8e16f3648
243b28,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690,PodSandboxId:4b6bd2fdd95b6df776b686da76d2bfd9c7dc4f85870c6e385f582db8a8eb627e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1706670259113646224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6273bd781ddb25868058b06d2dc5b10,},Annotations:map[string]string{io.kubernetes.container.hash: 9ff0db61,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d,PodSandboxId:c66f125826c13ec9cecace45286f0595424a296765eb918492dbf80dddbbe7ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1706670258984862106,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16733bb4cb57d3074ea85378bd0c43b7,},Annotations:map[string]string{io.kubernetes.container.hash: d1fd4562,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5,PodSandboxId:97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1706670258892586625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-218490,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a5ac2f6510a9d901d1e134669c82d9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ed5d8a2d-299a-4577-a805-609dcdfaddae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	67cbe72757ad0       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   20 seconds ago       Running             kube-proxy                1                   b16d323746f60       kube-proxy-rvz49
	1087598c8dc64       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   21 seconds ago       Running             coredns                   1                   24cca48a7ef5a       coredns-5dd5756b68-j6htm
	ae09eb69a10d5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   22 seconds ago       Running             kube-scheduler            1                   3bf3e9e4a8fa3       kube-scheduler-pause-218490
	93f30be8cfed2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   23 seconds ago       Running             etcd                      1                   49ca164101587       etcd-pause-218490
	31b1d5b911ed3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   23 seconds ago       Running             kube-apiserver            1                   fb8a3b325d407       kube-apiserver-pause-218490
	69db172d16380       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   23 seconds ago       Running             kube-controller-manager   1                   5561a2faf9f90       kube-controller-manager-pause-218490
	a5bb008b08bd6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   259fa81edfc61       coredns-5dd5756b68-j6htm
	b94b4b01351b2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   About a minute ago   Exited              kube-proxy                0                   6c2132152ae05       kube-proxy-rvz49
	f9309037c8bf5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   About a minute ago   Exited              kube-scheduler            0                   e85b9c94b06e9       kube-scheduler-pause-218490
	5d659716881e1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   About a minute ago   Exited              etcd                      0                   4b6bd2fdd95b6       etcd-pause-218490
	dbf09a5b997ad       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   About a minute ago   Exited              kube-apiserver            0                   c66f125826c13       kube-apiserver-pause-218490
	f17b87188e305       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   About a minute ago   Exited              kube-controller-manager   0                   97dd4a0bb5486       kube-controller-manager-pause-218490
	
	
	==> coredns [1087598c8dc64a4e5fc360b66a9add03838131ac6006a6fec7d4b7c790057f7f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51145 - 44375 "HINFO IN 3563477475614635631.1897945761809634163. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011492199s
	
	
	==> coredns [a5bb008b08bd622578134b9fdfa85731cafb09b05e0e311e5b530111d5efdde3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-218490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-218490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=pause-218490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_04_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:04:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-218490
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 03:05:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:04:47 +0000   Wed, 31 Jan 2024 03:04:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:04:47 +0000   Wed, 31 Jan 2024 03:04:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:04:47 +0000   Wed, 31 Jan 2024 03:04:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:04:47 +0000   Wed, 31 Jan 2024 03:04:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.138
	  Hostname:    pause-218490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb5e2773c7964632a7bc7d6497b9b7d6
	  System UUID:                cb5e2773-c796-4632-a7bc-7d6497b9b7d6
	  Boot ID:                    46375ca2-f694-4c86-95a4-f9a660800530
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j6htm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-pause-218490                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         95s
	  kube-system                 kube-apiserver-pause-218490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-pause-218490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-rvz49                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-218490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 80s                  kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node pause-218490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node pause-218490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node pause-218490 status is now: NodeHasSufficientPID
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node pause-218490 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node pause-218490 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node pause-218490 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                95s                  kubelet          Node pause-218490 status is now: NodeReady
	  Normal  RegisteredNode           84s                  node-controller  Node pause-218490 event: Registered Node pause-218490 in Controller
	  Normal  RegisteredNode           5s                   node-controller  Node pause-218490 event: Registered Node pause-218490 in Controller
	
	
	==> dmesg <==
	[Jan31 03:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064390] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.289009] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.548793] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136683] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.992343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan31 03:04] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.103704] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.165000] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.126937] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.206128] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +10.175917] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +8.837966] systemd-fstab-generator[1262]: Ignoring "noauto" for root device
	[Jan31 03:05] systemd-fstab-generator[2038]: Ignoring "noauto" for root device
	[  +0.152742] systemd-fstab-generator[2049]: Ignoring "noauto" for root device
	[  +0.182263] systemd-fstab-generator[2062]: Ignoring "noauto" for root device
	[  +0.161014] systemd-fstab-generator[2073]: Ignoring "noauto" for root device
	[  +0.290003] systemd-fstab-generator[2097]: Ignoring "noauto" for root device
	[  +2.263264] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [5d659716881e11afeae38139e67b4469fd4f6ba8efc65832982a0b14664cb690] <==
	{"level":"info","ts":"2024-01-31T03:04:21.407556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became leader at term 2"}
	{"level":"info","ts":"2024-01-31T03:04:21.407594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fdd267ffc1b7c75a elected leader fdd267ffc1b7c75a at term 2"}
	{"level":"info","ts":"2024-01-31T03:04:21.409199Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:04:21.41038Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fdd267ffc1b7c75a","local-member-attributes":"{Name:pause-218490 ClientURLs:[https://192.168.39.138:2379]}","request-path":"/0/members/fdd267ffc1b7c75a/attributes","cluster-id":"63b27a6ce7f4c58a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:04:21.410496Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:04:21.411075Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"63b27a6ce7f4c58a","local-member-id":"fdd267ffc1b7c75a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:04:21.411222Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:04:21.411272Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:04:21.411327Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:04:21.412259Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.138:2379"}
	{"level":"info","ts":"2024-01-31T03:04:21.412932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:04:21.415433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:04:21.415503Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	WARNING: 2024/01/31 03:04:26 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-01-31T03:04:44.105373Z","caller":"traceutil/trace.go:171","msg":"trace[202895530] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"240.871782ms","start":"2024-01-31T03:04:43.864466Z","end":"2024-01-31T03:04:44.105338Z","steps":["trace[202895530] 'process raft request'  (duration: 240.332855ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-31T03:05:27.92534Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-31T03:05:27.925505Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-218490","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.138:2380"],"advertise-client-urls":["https://192.168.39.138:2379"]}
	{"level":"warn","ts":"2024-01-31T03:05:27.925633Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-31T03:05:27.925866Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-31T03:05:28.015453Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.138:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-31T03:05:28.015567Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.138:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-31T03:05:28.0173Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fdd267ffc1b7c75a","current-leader-member-id":"fdd267ffc1b7c75a"}
	{"level":"info","ts":"2024-01-31T03:05:28.020491Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2024-01-31T03:05:28.020676Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2024-01-31T03:05:28.020719Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-218490","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.138:2380"],"advertise-client-urls":["https://192.168.39.138:2379"]}
	
	
	==> etcd [93f30be8cfed28e4c3fb6af7f2115456b829fb32be39ac7d9987d21f216a99e5] <==
	{"level":"info","ts":"2024-01-31T03:05:41.46367Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-31T03:05:41.463677Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-31T03:05:41.463945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a switched to configuration voters=(18289795384869373786)"}
	{"level":"info","ts":"2024-01-31T03:05:41.464047Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"63b27a6ce7f4c58a","local-member-id":"fdd267ffc1b7c75a","added-peer-id":"fdd267ffc1b7c75a","added-peer-peer-urls":["https://192.168.39.138:2380"]}
	{"level":"info","ts":"2024-01-31T03:05:41.464257Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"63b27a6ce7f4c58a","local-member-id":"fdd267ffc1b7c75a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:05:41.46437Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:05:41.46631Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-31T03:05:41.466553Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fdd267ffc1b7c75a","initial-advertise-peer-urls":["https://192.168.39.138:2380"],"listen-peer-urls":["https://192.168.39.138:2380"],"advertise-client-urls":["https://192.168.39.138:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.138:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-31T03:05:41.466584Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-31T03:05:41.466637Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2024-01-31T03:05:41.466642Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2024-01-31T03:05:43.343368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-31T03:05:43.343491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-31T03:05:43.343548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a received MsgPreVoteResp from fdd267ffc1b7c75a at term 2"}
	{"level":"info","ts":"2024-01-31T03:05:43.343583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became candidate at term 3"}
	{"level":"info","ts":"2024-01-31T03:05:43.343607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a received MsgVoteResp from fdd267ffc1b7c75a at term 3"}
	{"level":"info","ts":"2024-01-31T03:05:43.343634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became leader at term 3"}
	{"level":"info","ts":"2024-01-31T03:05:43.343676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fdd267ffc1b7c75a elected leader fdd267ffc1b7c75a at term 3"}
	{"level":"info","ts":"2024-01-31T03:05:43.345383Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"fdd267ffc1b7c75a","local-member-attributes":"{Name:pause-218490 ClientURLs:[https://192.168.39.138:2379]}","request-path":"/0/members/fdd267ffc1b7c75a/attributes","cluster-id":"63b27a6ce7f4c58a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:05:43.34543Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:05:43.345721Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:05:43.347027Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:05:43.347233Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:05:43.347272Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T03:05:43.347386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.138:2379"}
	
	
	==> kernel <==
	 03:06:02 up 2 min,  0 users,  load average: 0.54, 0.25, 0.09
	Linux pause-218490 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [31b1d5b911ed30d3e93f3f008c75bfeb206773fc7e2820283bcf449adb1d6f30] <==
	I0131 03:05:45.004598       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0131 03:05:45.063960       1 controller.go:134] Starting OpenAPI controller
	I0131 03:05:45.064298       1 controller.go:85] Starting OpenAPI V3 controller
	I0131 03:05:45.064601       1 naming_controller.go:291] Starting NamingConditionController
	I0131 03:05:45.064636       1 establishing_controller.go:76] Starting EstablishingController
	I0131 03:05:45.064854       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0131 03:05:45.065068       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0131 03:05:45.065092       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0131 03:05:45.207497       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0131 03:05:45.208712       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0131 03:05:45.208812       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0131 03:05:45.218411       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0131 03:05:45.218462       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0131 03:05:45.218492       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0131 03:05:45.218550       1 shared_informer.go:318] Caches are synced for configmaps
	I0131 03:05:45.223298       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0131 03:05:45.234313       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0131 03:05:45.234469       1 aggregator.go:166] initial CRD sync complete...
	I0131 03:05:45.234520       1 autoregister_controller.go:141] Starting autoregister controller
	I0131 03:05:45.234550       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0131 03:05:45.234579       1 cache.go:39] Caches are synced for autoregister controller
	E0131 03:05:45.275238       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0131 03:05:46.011987       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0131 03:05:57.454947       1 controller.go:624] quota admission added evaluator for: endpoints
	I0131 03:05:57.487082       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [dbf09a5b997ad86fec7d4a41c766139b3777d870aa74df923a6b96eb34dca04d] <==
	I0131 03:04:24.926370       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0131 03:04:24.977264       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0131 03:04:25.123515       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0131 03:04:25.140795       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.138]
	I0131 03:04:25.142308       1 controller.go:624] quota admission added evaluator for: endpoints
	I0131 03:04:25.149884       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0131 03:04:25.182417       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	E0131 03:04:26.571877       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0131 03:04:26.571939       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0131 03:04:26.571963       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.01µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0131 03:04:26.573280       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0131 03:04:26.573387       1 timeout.go:142] post-timeout activity - time-elapsed: 1.55604ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	I0131 03:04:26.643572       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0131 03:04:26.669144       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0131 03:04:26.687870       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0131 03:04:38.281052       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0131 03:04:38.836322       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0131 03:05:27.932402       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0131 03:05:27.932983       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933032       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933060       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933094       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933606       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933726       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0131 03:05:27.933914       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [69db172d1638023d0799898602e0500d0d43900079da4aba994d92b53a142333] <==
	I0131 03:05:57.494616       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0131 03:05:57.496067       1 shared_informer.go:318] Caches are synced for job
	I0131 03:05:57.496224       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0131 03:05:57.498364       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0131 03:05:57.498476       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0131 03:05:57.503315       1 shared_informer.go:318] Caches are synced for ephemeral
	I0131 03:05:57.504481       1 shared_informer.go:318] Caches are synced for PVC protection
	I0131 03:05:57.504588       1 shared_informer.go:318] Caches are synced for service account
	I0131 03:05:57.507831       1 shared_informer.go:318] Caches are synced for daemon sets
	I0131 03:05:57.507943       1 shared_informer.go:318] Caches are synced for PV protection
	I0131 03:05:57.517216       1 shared_informer.go:318] Caches are synced for HPA
	I0131 03:05:57.520521       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0131 03:05:57.523929       1 shared_informer.go:318] Caches are synced for GC
	I0131 03:05:57.528247       1 shared_informer.go:318] Caches are synced for TTL
	I0131 03:05:57.531573       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0131 03:05:57.531781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.23µs"
	I0131 03:05:57.555941       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0131 03:05:57.581559       1 shared_informer.go:318] Caches are synced for deployment
	I0131 03:05:57.628772       1 shared_informer.go:318] Caches are synced for cronjob
	I0131 03:05:57.638786       1 shared_informer.go:318] Caches are synced for disruption
	I0131 03:05:57.691664       1 shared_informer.go:318] Caches are synced for resource quota
	I0131 03:05:57.694720       1 shared_informer.go:318] Caches are synced for resource quota
	I0131 03:05:58.037058       1 shared_informer.go:318] Caches are synced for garbage collector
	I0131 03:05:58.056296       1 shared_informer.go:318] Caches are synced for garbage collector
	I0131 03:05:58.056388       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [f17b87188e305afead246e60a0bad45ac2ae02c3d3aaad3501942d08620e3fe5] <==
	I0131 03:04:38.151023       1 shared_informer.go:318] Caches are synced for service account
	I0131 03:04:38.154600       1 range_allocator.go:380] "Set node PodCIDR" node="pause-218490" podCIDRs=["10.244.0.0/24"]
	I0131 03:04:38.163006       1 shared_informer.go:318] Caches are synced for resource quota
	I0131 03:04:38.176996       1 shared_informer.go:318] Caches are synced for crt configmap
	I0131 03:04:38.202423       1 shared_informer.go:318] Caches are synced for resource quota
	I0131 03:04:38.230256       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0131 03:04:38.288731       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0131 03:04:38.581678       1 shared_informer.go:318] Caches are synced for garbage collector
	I0131 03:04:38.632519       1 shared_informer.go:318] Caches are synced for garbage collector
	I0131 03:04:38.632647       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0131 03:04:38.867279       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rvz49"
	I0131 03:04:39.060510       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0131 03:04:39.089707       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-h9xfp"
	I0131 03:04:39.140631       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-j6htm"
	I0131 03:04:39.232137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="942.50645ms"
	I0131 03:04:39.270277       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-h9xfp"
	I0131 03:04:39.365075       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="132.496231ms"
	I0131 03:04:39.433132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.440956ms"
	I0131 03:04:39.436917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="3.622248ms"
	I0131 03:04:40.953797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="171.804µs"
	I0131 03:04:40.977005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.234µs"
	I0131 03:04:40.982248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="225.318µs"
	I0131 03:04:41.966265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="2.157451ms"
	I0131 03:04:42.017802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.65637ms"
	I0131 03:04:42.023716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="369.933µs"
	
	
	==> kube-proxy [67cbe72757ad03ae0a74115ecb75000109e39dee0b9a4ead7d68075c1ab7d735] <==
	I0131 03:05:42.087850       1 server_others.go:69] "Using iptables proxy"
	I0131 03:05:45.202386       1 node.go:141] Successfully retrieved node IP: 192.168.39.138
	I0131 03:05:45.507248       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 03:05:45.507768       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:05:45.528566       1 server_others.go:152] "Using iptables Proxier"
	I0131 03:05:45.528667       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:05:45.528924       1 server.go:846] "Version info" version="v1.28.4"
	I0131 03:05:45.528937       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:05:45.534244       1 config.go:188] "Starting service config controller"
	I0131 03:05:45.534375       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:05:45.534502       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:05:45.535089       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:05:45.539697       1 config.go:315] "Starting node config controller"
	I0131 03:05:45.540768       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:05:45.635486       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0131 03:05:45.635627       1 shared_informer.go:318] Caches are synced for service config
	I0131 03:05:45.642036       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [b94b4b01351b28ac8a8626bff5f9db12828f6173e15e10b0289d1a52e9f92e4e] <==
	I0131 03:04:41.442689       1 server_others.go:69] "Using iptables proxy"
	I0131 03:04:41.464237       1 node.go:141] Successfully retrieved node IP: 192.168.39.138
	I0131 03:04:41.513790       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 03:04:41.513859       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:04:41.516773       1 server_others.go:152] "Using iptables Proxier"
	I0131 03:04:41.517647       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:04:41.518340       1 server.go:846] "Version info" version="v1.28.4"
	I0131 03:04:41.518387       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:04:41.521666       1 config.go:188] "Starting service config controller"
	I0131 03:04:41.522290       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:04:41.522369       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:04:41.522379       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:04:41.526988       1 config.go:315] "Starting node config controller"
	I0131 03:04:41.527110       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:04:41.623430       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0131 03:04:41.623551       1 shared_informer.go:318] Caches are synced for service config
	I0131 03:04:41.628313       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ae09eb69a10d5b1204f8345894b2d1bd469056f63993a976bb5a9b8d7120ec9a] <==
	I0131 03:05:42.158260       1 serving.go:348] Generated self-signed cert in-memory
	W0131 03:05:45.180864       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0131 03:05:45.180996       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0131 03:05:45.181014       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0131 03:05:45.181109       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0131 03:05:45.229267       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0131 03:05:45.229363       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:05:45.238479       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0131 03:05:45.239728       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0131 03:05:45.239827       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0131 03:05:45.239884       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0131 03:05:45.340241       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f9309037c8bf5db9edb26506279b1381c534c5131d3dd75f9e3bcf62fc80d6f6] <==
	W0131 03:04:24.124307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:04:24.124406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0131 03:04:24.388651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 03:04:24.388743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0131 03:04:24.412359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:04:24.412447       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0131 03:04:24.435597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:04:24.435687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0131 03:04:24.457003       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:04:24.457091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0131 03:04:24.505961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0131 03:04:24.506125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0131 03:04:24.571295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:04:24.571356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:04:24.603543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:04:24.603604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 03:04:24.615991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0131 03:04:24.616052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0131 03:04:24.729975       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 03:04:24.730108       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0131 03:04:27.747250       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0131 03:05:27.927268       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0131 03:05:27.927592       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0131 03:05:27.927780       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0131 03:05:27.933835       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:03:48 UTC, ends at Wed 2024-01-31 03:06:02 UTC. --
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.176847    1269 status_manager.go:853] "Failed to get status for pod" podUID="e6273bd781ddb25868058b06d2dc5b10" pod="kube-system/etcd-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.196658    1269 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97dd4a0bb5486555f175f1e7c66e6755c7530c52f145500142dc17dc382f4906"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.197512    1269 status_manager.go:853] "Failed to get status for pod" podUID="16733bb4cb57d3074ea85378bd0c43b7" pod="kube-system/kube-apiserver-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.197750    1269 status_manager.go:853] "Failed to get status for pod" podUID="47a5ac2f6510a9d901d1e134669c82d9" pod="kube-system/kube-controller-manager-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.197965    1269 status_manager.go:853] "Failed to get status for pod" podUID="2d5e681347ce1546f8e16f3648243b28" pod="kube-system/kube-scheduler-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.198321    1269 status_manager.go:853] "Failed to get status for pod" podUID="568ad034-cb61-44ba-9fd9-892cfd5b9fc6" pod="kube-system/kube-proxy-rvz49" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rvz49\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.198599    1269 status_manager.go:853] "Failed to get status for pod" podUID="f935de64-0599-41ac-9a8f-fa1b1fd507a2" pod="kube-system/coredns-5dd5756b68-j6htm" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-j6htm\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:37 pause-218490 kubelet[1269]: I0131 03:05:37.198895    1269 status_manager.go:853] "Failed to get status for pod" podUID="e6273bd781ddb25868058b06d2dc5b10" pod="kube-system/etcd-pause-218490" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-218490\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.285857    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.286114    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.286459    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.286637    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.286884    1269 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: I0131 03:05:38.286919    1269 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.287076    1269 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused" interval="200ms"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.488009    1269 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused" interval="400ms"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532085    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532414    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532600    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532738    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532893    1269 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-218490\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.532904    1269 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Jan 31 03:05:38 pause-218490 kubelet[1269]: E0131 03:05:38.889860    1269 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused" interval="800ms"
	Jan 31 03:05:39 pause-218490 kubelet[1269]: E0131 03:05:39.691510    1269 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-218490?timeout=10s\": dial tcp 192.168.39.138:8443: connect: connection refused" interval="1.6s"
	Jan 31 03:05:45 pause-218490 kubelet[1269]: E0131 03:05:45.139694    1269 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-218490 -n pause-218490
helpers_test.go:261: (dbg) Run:  kubectl --context pause-218490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (79.40s)
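The kubelet lines above show a retry pattern rather than a single failure: each failed "ensure lease" call is retried after a roughly doubled interval (200ms, 400ms, 800ms, 1.6s) while the apiserver on 192.168.39.138:8443 refuses connections. The following is a minimal, self-contained Go sketch of that escalating-retry shape, not kubelet source; tryEnsureLease and the 7s cap are hypothetical stand-ins.

// lease_retry_sketch.go — illustrative only, not kubelet code.
package main

import (
	"errors"
	"fmt"
	"time"
)

// tryEnsureLease stands in for the API call that keeps failing while the
// apiserver is down; the error text mirrors the log above.
func tryEnsureLease() error {
	return errors.New("dial tcp 192.168.39.138:8443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first retry interval seen in the log
	const maxInterval = 7 * time.Second // assumed cap, not taken from the log

	for attempt := 1; attempt <= 5; attempt++ {
		if err := tryEnsureLease(); err != nil {
			fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, interval)
			time.Sleep(interval)
			// Back off: double the wait up to the cap, mirroring the
			// 200ms -> 400ms -> 800ms -> 1.6s progression above.
			interval *= 2
			if interval > maxInterval {
				interval = maxInterval
			}
			continue
		}
		fmt.Println("lease ensured")
		return
	}
	fmt.Println("falling back after repeated failures")
}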

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-625812 --alsologtostderr -v=3
E0131 03:11:44.091285 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-625812 --alsologtostderr -v=3: exit status 82 (2m0.571792006s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-625812"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 03:11:43.901004 1463689 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:11:43.901159 1463689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:11:43.901169 1463689 out.go:309] Setting ErrFile to fd 2...
	I0131 03:11:43.901174 1463689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:11:43.901383 1463689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:11:43.901636 1463689 out.go:303] Setting JSON to false
	I0131 03:11:43.901726 1463689 mustload.go:65] Loading cluster: no-preload-625812
	I0131 03:11:43.902079 1463689 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:11:43.902155 1463689 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/config.json ...
	I0131 03:11:43.902321 1463689 mustload.go:65] Loading cluster: no-preload-625812
	I0131 03:11:43.902444 1463689 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:11:43.902496 1463689 stop.go:39] StopHost: no-preload-625812
	I0131 03:11:43.902909 1463689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:11:43.902977 1463689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:11:43.918881 1463689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44449
	I0131 03:11:43.919484 1463689 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:11:43.920240 1463689 main.go:141] libmachine: Using API Version  1
	I0131 03:11:43.920271 1463689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:11:43.920657 1463689 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:11:43.922725 1463689 out.go:177] * Stopping node "no-preload-625812"  ...
	I0131 03:11:43.924359 1463689 main.go:141] libmachine: Stopping "no-preload-625812"...
	I0131 03:11:43.924376 1463689 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:11:43.926284 1463689 main.go:141] libmachine: (no-preload-625812) Calling .Stop
	I0131 03:11:43.929911 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 0/120
	I0131 03:11:44.931571 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 1/120
	I0131 03:11:45.934106 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 2/120
	I0131 03:11:46.936653 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 3/120
	I0131 03:11:47.938947 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 4/120
	I0131 03:11:48.940402 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 5/120
	I0131 03:11:49.943334 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 6/120
	I0131 03:11:50.945259 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 7/120
	I0131 03:11:51.947432 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 8/120
	I0131 03:11:52.949437 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 9/120
	I0131 03:11:53.951655 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 10/120
	I0131 03:11:54.953964 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 11/120
	I0131 03:11:55.955535 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 12/120
	I0131 03:11:56.957285 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 13/120
	I0131 03:11:57.959959 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 14/120
	I0131 03:11:58.962215 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 15/120
	I0131 03:11:59.963816 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 16/120
	I0131 03:12:00.965425 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 17/120
	I0131 03:12:01.967230 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 18/120
	I0131 03:12:02.969669 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 19/120
	I0131 03:12:03.971984 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 20/120
	I0131 03:12:04.973409 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 21/120
	I0131 03:12:05.974761 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 22/120
	I0131 03:12:06.977332 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 23/120
	I0131 03:12:07.978965 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 24/120
	I0131 03:12:08.981489 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 25/120
	I0131 03:12:09.982895 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 26/120
	I0131 03:12:10.984293 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 27/120
	I0131 03:12:11.985830 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 28/120
	I0131 03:12:12.987963 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 29/120
	I0131 03:12:13.989657 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 30/120
	I0131 03:12:14.990945 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 31/120
	I0131 03:12:15.993097 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 32/120
	I0131 03:12:16.994501 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 33/120
	I0131 03:12:17.997082 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 34/120
	I0131 03:12:18.999544 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 35/120
	I0131 03:12:20.001441 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 36/120
	I0131 03:12:21.003269 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 37/120
	I0131 03:12:22.005037 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 38/120
	I0131 03:12:23.007085 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 39/120
	I0131 03:12:24.009243 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 40/120
	I0131 03:12:25.010915 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 41/120
	I0131 03:12:26.012464 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 42/120
	I0131 03:12:27.013898 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 43/120
	I0131 03:12:28.015544 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 44/120
	I0131 03:12:29.017749 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 45/120
	I0131 03:12:30.019297 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 46/120
	I0131 03:12:31.020825 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 47/120
	I0131 03:12:32.022254 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 48/120
	I0131 03:12:33.023799 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 49/120
	I0131 03:12:34.026315 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 50/120
	I0131 03:12:35.028005 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 51/120
	I0131 03:12:36.029892 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 52/120
	I0131 03:12:37.031407 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 53/120
	I0131 03:12:38.033136 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 54/120
	I0131 03:12:39.035479 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 55/120
	I0131 03:12:40.036973 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 56/120
	I0131 03:12:41.038428 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 57/120
	I0131 03:12:42.040161 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 58/120
	I0131 03:12:43.041481 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 59/120
	I0131 03:12:44.043710 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 60/120
	I0131 03:12:45.044911 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 61/120
	I0131 03:12:46.046521 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 62/120
	I0131 03:12:47.048293 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 63/120
	I0131 03:12:48.050082 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 64/120
	I0131 03:12:49.051844 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 65/120
	I0131 03:12:50.053366 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 66/120
	I0131 03:12:51.054771 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 67/120
	I0131 03:12:52.056475 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 68/120
	I0131 03:12:53.058782 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 69/120
	I0131 03:12:54.061099 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 70/120
	I0131 03:12:55.062441 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 71/120
	I0131 03:12:56.063944 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 72/120
	I0131 03:12:57.065267 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 73/120
	I0131 03:12:58.066870 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 74/120
	I0131 03:12:59.069075 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 75/120
	I0131 03:13:00.070579 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 76/120
	I0131 03:13:01.072071 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 77/120
	I0131 03:13:02.073512 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 78/120
	I0131 03:13:03.074868 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 79/120
	I0131 03:13:04.076146 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 80/120
	I0131 03:13:05.077566 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 81/120
	I0131 03:13:06.078971 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 82/120
	I0131 03:13:07.080523 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 83/120
	I0131 03:13:08.082394 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 84/120
	I0131 03:13:09.084576 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 85/120
	I0131 03:13:10.085888 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 86/120
	I0131 03:13:11.087759 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 87/120
	I0131 03:13:12.089431 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 88/120
	I0131 03:13:13.091163 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 89/120
	I0131 03:13:14.093424 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 90/120
	I0131 03:13:15.095149 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 91/120
	I0131 03:13:16.293257 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 92/120
	I0131 03:13:17.295180 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 93/120
	I0131 03:13:18.297684 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 94/120
	I0131 03:13:19.300303 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 95/120
	I0131 03:13:20.302575 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 96/120
	I0131 03:13:21.304162 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 97/120
	I0131 03:13:22.305645 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 98/120
	I0131 03:13:23.307403 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 99/120
	I0131 03:13:24.309785 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 100/120
	I0131 03:13:25.311972 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 101/120
	I0131 03:13:26.313524 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 102/120
	I0131 03:13:27.315730 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 103/120
	I0131 03:13:28.317395 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 104/120
	I0131 03:13:29.319314 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 105/120
	I0131 03:13:30.321418 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 106/120
	I0131 03:13:31.323228 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 107/120
	I0131 03:13:32.324622 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 108/120
	I0131 03:13:33.326474 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 109/120
	I0131 03:13:34.328587 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 110/120
	I0131 03:13:35.330048 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 111/120
	I0131 03:13:36.331457 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 112/120
	I0131 03:13:37.332887 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 113/120
	I0131 03:13:38.334782 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 114/120
	I0131 03:13:39.336638 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 115/120
	I0131 03:13:40.338399 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 116/120
	I0131 03:13:41.340055 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 117/120
	I0131 03:13:42.341736 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 118/120
	I0131 03:13:43.343279 1463689 main.go:141] libmachine: (no-preload-625812) Waiting for machine to stop 119/120
	I0131 03:13:44.344013 1463689 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0131 03:13:44.344088 1463689 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0131 03:13:44.345786 1463689 out.go:177] 
	W0131 03:13:44.347244 1463689 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0131 03:13:44.347276 1463689 out.go:239] * 
	* 
	W0131 03:13:44.406704 1463689 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 03:13:44.408553 1463689 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-625812 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-625812 -n no-preload-625812
E0131 03:13:49.126783 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:51.596549 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-625812 -n no-preload-625812: exit status 3 (18.53369597s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:14:02.942921 1465254 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.23:22: connect: no route to host
	E0131 03:14:02.942943 1465254 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.23:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-625812" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.11s)
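The stderr above records the stop flow behind this failure: minikube asks the kvm2 driver to stop the VM, polls its state once per second for up to 120 attempts, and exits with GUEST_STOP_TIMEOUT when the guest never leaves "Running". The snippet below is a minimal Go sketch of that polling loop under those assumptions, not minikube's implementation; requestStop and getState are hypothetical stand-ins for the libmachine driver calls.

// stop_wait_sketch.go — illustrative only, not minikube code.
package main

import (
	"fmt"
	"time"
)

// requestStop stands in for the driver call that asks the hypervisor to
// shut the guest down.
func requestStop() {}

// getState stands in for the driver state query; in the failing runs above
// the guest never leaves "Running".
func getState() string {
	return "Running"
}

// stopWithTimeout polls once per second, mirroring the
// "Waiting for machine to stop N/120" lines in the stderr above.
// With the stubs here it runs for the full two minutes before failing.
func stopWithTimeout(attempts int) error {
	requestStop()
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	if err := stopWithTimeout(120); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}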

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (138.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-711547 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-711547 --alsologtostderr -v=3: exit status 82 (2m0.321102434s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-711547"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 03:11:57.273095 1463938 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:11:57.273256 1463938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:11:57.273269 1463938 out.go:309] Setting ErrFile to fd 2...
	I0131 03:11:57.273276 1463938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:11:57.273528 1463938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:11:57.273784 1463938 out.go:303] Setting JSON to false
	I0131 03:11:57.273865 1463938 mustload.go:65] Loading cluster: old-k8s-version-711547
	I0131 03:11:57.274242 1463938 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:11:57.274310 1463938 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/config.json ...
	I0131 03:11:57.274441 1463938 mustload.go:65] Loading cluster: old-k8s-version-711547
	I0131 03:11:57.274581 1463938 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:11:57.274610 1463938 stop.go:39] StopHost: old-k8s-version-711547
	I0131 03:11:57.275030 1463938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:11:57.275097 1463938 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:11:57.292538 1463938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I0131 03:11:57.293080 1463938 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:11:57.293835 1463938 main.go:141] libmachine: Using API Version  1
	I0131 03:11:57.293864 1463938 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:11:57.294355 1463938 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:11:57.297652 1463938 out.go:177] * Stopping node "old-k8s-version-711547"  ...
	I0131 03:11:57.299139 1463938 main.go:141] libmachine: Stopping "old-k8s-version-711547"...
	I0131 03:11:57.299190 1463938 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:11:57.301781 1463938 main.go:141] libmachine: (old-k8s-version-711547) Calling .Stop
	I0131 03:11:57.305971 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 0/120
	I0131 03:11:58.307552 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 1/120
	I0131 03:11:59.308884 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 2/120
	I0131 03:12:00.310998 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 3/120
	I0131 03:12:01.313326 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 4/120
	I0131 03:12:02.315530 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 5/120
	I0131 03:12:03.316964 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 6/120
	I0131 03:12:04.318434 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 7/120
	I0131 03:12:05.319974 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 8/120
	I0131 03:12:06.321535 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 9/120
	I0131 03:12:07.323359 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 10/120
	I0131 03:12:08.325014 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 11/120
	I0131 03:12:09.326509 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 12/120
	I0131 03:12:10.328012 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 13/120
	I0131 03:12:11.329553 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 14/120
	I0131 03:12:12.331897 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 15/120
	I0131 03:12:13.333716 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 16/120
	I0131 03:12:14.335175 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 17/120
	I0131 03:12:15.337662 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 18/120
	I0131 03:12:16.339507 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 19/120
	I0131 03:12:17.341040 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 20/120
	I0131 03:12:18.342701 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 21/120
	I0131 03:12:19.345094 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 22/120
	I0131 03:12:20.346732 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 23/120
	I0131 03:12:21.348304 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 24/120
	I0131 03:12:22.350754 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 25/120
	I0131 03:12:23.352410 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 26/120
	I0131 03:12:24.354664 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 27/120
	I0131 03:12:25.356148 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 28/120
	I0131 03:12:26.357710 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 29/120
	I0131 03:12:27.360004 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 30/120
	I0131 03:12:28.361507 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 31/120
	I0131 03:12:29.363006 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 32/120
	I0131 03:12:30.364381 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 33/120
	I0131 03:12:31.365890 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 34/120
	I0131 03:12:32.367993 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 35/120
	I0131 03:12:33.369429 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 36/120
	I0131 03:12:34.371099 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 37/120
	I0131 03:12:35.372455 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 38/120
	I0131 03:12:36.373941 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 39/120
	I0131 03:12:37.375883 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 40/120
	I0131 03:12:38.377188 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 41/120
	I0131 03:12:39.378765 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 42/120
	I0131 03:12:40.380273 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 43/120
	I0131 03:12:41.381777 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 44/120
	I0131 03:12:42.384140 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 45/120
	I0131 03:12:43.385434 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 46/120
	I0131 03:12:44.386896 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 47/120
	I0131 03:12:45.388832 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 48/120
	I0131 03:12:46.390449 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 49/120
	I0131 03:12:47.393006 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 50/120
	I0131 03:12:48.394734 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 51/120
	I0131 03:12:49.396319 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 52/120
	I0131 03:12:50.398425 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 53/120
	I0131 03:12:51.399819 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 54/120
	I0131 03:12:52.401800 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 55/120
	I0131 03:12:53.403190 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 56/120
	I0131 03:12:54.404774 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 57/120
	I0131 03:12:55.406357 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 58/120
	I0131 03:12:56.407796 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 59/120
	I0131 03:12:57.410082 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 60/120
	I0131 03:12:58.411482 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 61/120
	I0131 03:12:59.412818 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 62/120
	I0131 03:13:00.414237 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 63/120
	I0131 03:13:01.416227 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 64/120
	I0131 03:13:02.418615 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 65/120
	I0131 03:13:03.420096 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 66/120
	I0131 03:13:04.421741 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 67/120
	I0131 03:13:05.423301 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 68/120
	I0131 03:13:06.424777 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 69/120
	I0131 03:13:07.426294 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 70/120
	I0131 03:13:08.427850 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 71/120
	I0131 03:13:09.429429 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 72/120
	I0131 03:13:10.431022 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 73/120
	I0131 03:13:11.433163 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 74/120
	I0131 03:13:12.435320 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 75/120
	I0131 03:13:13.437262 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 76/120
	I0131 03:13:14.438758 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 77/120
	I0131 03:13:15.441160 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 78/120
	I0131 03:13:16.442983 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 79/120
	I0131 03:13:17.445167 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 80/120
	I0131 03:13:18.446853 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 81/120
	I0131 03:13:19.449212 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 82/120
	I0131 03:13:20.450881 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 83/120
	I0131 03:13:21.452529 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 84/120
	I0131 03:13:22.454683 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 85/120
	I0131 03:13:23.456055 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 86/120
	I0131 03:13:24.457598 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 87/120
	I0131 03:13:25.458820 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 88/120
	I0131 03:13:26.461173 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 89/120
	I0131 03:13:27.463696 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 90/120
	I0131 03:13:28.465246 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 91/120
	I0131 03:13:29.466693 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 92/120
	I0131 03:13:30.469190 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 93/120
	I0131 03:13:31.470831 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 94/120
	I0131 03:13:32.472348 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 95/120
	I0131 03:13:33.473730 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 96/120
	I0131 03:13:34.475299 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 97/120
	I0131 03:13:35.476668 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 98/120
	I0131 03:13:36.478037 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 99/120
	I0131 03:13:37.479860 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 100/120
	I0131 03:13:38.481443 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 101/120
	I0131 03:13:39.482719 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 102/120
	I0131 03:13:40.484422 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 103/120
	I0131 03:13:41.485867 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 104/120
	I0131 03:13:42.488185 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 105/120
	I0131 03:13:43.490333 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 106/120
	I0131 03:13:44.492309 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 107/120
	I0131 03:13:45.493751 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 108/120
	I0131 03:13:46.496442 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 109/120
	I0131 03:13:47.497985 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 110/120
	I0131 03:13:48.499463 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 111/120
	I0131 03:13:49.501461 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 112/120
	I0131 03:13:50.502783 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 113/120
	I0131 03:13:51.504985 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 114/120
	I0131 03:13:52.506584 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 115/120
	I0131 03:13:53.508143 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 116/120
	I0131 03:13:54.509499 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 117/120
	I0131 03:13:55.510900 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 118/120
	I0131 03:13:56.512350 1463938 main.go:141] libmachine: (old-k8s-version-711547) Waiting for machine to stop 119/120
	I0131 03:13:57.513188 1463938 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0131 03:13:57.513274 1463938 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0131 03:13:57.515262 1463938 out.go:177] 
	W0131 03:13:57.516846 1463938 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0131 03:13:57.516866 1463938 out.go:239] * 
	* 
	W0131 03:13:57.522733 1463938 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 03:13:57.524368 1463938 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-711547 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-711547 -n old-k8s-version-711547
E0131 03:13:59.367932 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:14:00.516003 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:00.521333 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:00.531659 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:00.552060 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:00.592401 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:00.672926 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:00.833465 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:01.154219 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:01.794720 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-711547 -n old-k8s-version-711547: exit status 3 (18.476411619s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:14:15.998869 1465322 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.63:22: connect: no route to host
	E0131 03:14:15.998896 1465322 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.63:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-711547" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (138.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-873005 --alsologtostderr -v=3
E0131 03:12:12.248974 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:12.254297 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:12.264615 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:12.284987 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:12.325387 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:12.406270 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:12.566740 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:12.886911 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:13.527157 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:14.808112 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:17.368341 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-873005 --alsologtostderr -v=3: exit status 82 (2m0.816198588s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-873005"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 03:12:07.659924 1464037 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:12:07.660120 1464037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:12:07.660141 1464037 out.go:309] Setting ErrFile to fd 2...
	I0131 03:12:07.660150 1464037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:12:07.660487 1464037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:12:07.660877 1464037 out.go:303] Setting JSON to false
	I0131 03:12:07.660999 1464037 mustload.go:65] Loading cluster: default-k8s-diff-port-873005
	I0131 03:12:07.661521 1464037 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:12:07.661644 1464037 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/config.json ...
	I0131 03:12:07.661906 1464037 mustload.go:65] Loading cluster: default-k8s-diff-port-873005
	I0131 03:12:07.662092 1464037 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:12:07.662150 1464037 stop.go:39] StopHost: default-k8s-diff-port-873005
	I0131 03:12:07.662808 1464037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:12:07.662886 1464037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:12:07.679329 1464037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0131 03:12:07.679848 1464037 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:12:07.680526 1464037 main.go:141] libmachine: Using API Version  1
	I0131 03:12:07.680555 1464037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:12:07.680861 1464037 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:12:07.683443 1464037 out.go:177] * Stopping node "default-k8s-diff-port-873005"  ...
	I0131 03:12:07.684818 1464037 main.go:141] libmachine: Stopping "default-k8s-diff-port-873005"...
	I0131 03:12:07.684838 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:12:07.686712 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Stop
	I0131 03:12:07.690429 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 0/120
	I0131 03:12:08.691837 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 1/120
	I0131 03:12:09.693430 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 2/120
	I0131 03:12:10.694961 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 3/120
	I0131 03:12:11.696505 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 4/120
	I0131 03:12:12.698533 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 5/120
	I0131 03:12:13.700167 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 6/120
	I0131 03:12:14.701609 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 7/120
	I0131 03:12:15.703383 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 8/120
	I0131 03:12:16.704832 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 9/120
	I0131 03:12:17.707491 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 10/120
	I0131 03:12:18.709713 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 11/120
	I0131 03:12:19.711313 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 12/120
	I0131 03:12:20.713223 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 13/120
	I0131 03:12:21.715185 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 14/120
	I0131 03:12:22.721678 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 15/120
	I0131 03:12:23.723236 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 16/120
	I0131 03:12:24.725305 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 17/120
	I0131 03:12:25.726946 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 18/120
	I0131 03:12:26.729159 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 19/120
	I0131 03:12:27.731835 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 20/120
	I0131 03:12:28.733212 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 21/120
	I0131 03:12:29.734734 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 22/120
	I0131 03:12:30.736095 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 23/120
	I0131 03:12:31.737787 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 24/120
	I0131 03:12:32.739915 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 25/120
	I0131 03:12:33.741806 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 26/120
	I0131 03:12:34.743384 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 27/120
	I0131 03:12:35.745368 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 28/120
	I0131 03:12:36.746781 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 29/120
	I0131 03:12:37.749177 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 30/120
	I0131 03:12:38.750651 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 31/120
	I0131 03:12:39.752125 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 32/120
	I0131 03:12:40.753592 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 33/120
	I0131 03:12:41.754895 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 34/120
	I0131 03:12:42.756957 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 35/120
	I0131 03:12:43.758953 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 36/120
	I0131 03:12:44.760569 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 37/120
	I0131 03:12:45.762103 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 38/120
	I0131 03:12:46.763986 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 39/120
	I0131 03:12:47.765822 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 40/120
	I0131 03:12:48.767240 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 41/120
	I0131 03:12:49.768614 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 42/120
	I0131 03:12:50.769926 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 43/120
	I0131 03:12:51.771623 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 44/120
	I0131 03:12:52.773727 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 45/120
	I0131 03:12:53.775119 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 46/120
	I0131 03:12:54.776722 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 47/120
	I0131 03:12:55.778130 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 48/120
	I0131 03:12:56.779735 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 49/120
	I0131 03:12:57.782128 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 50/120
	I0131 03:12:58.783484 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 51/120
	I0131 03:12:59.785068 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 52/120
	I0131 03:13:00.786642 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 53/120
	I0131 03:13:01.788140 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 54/120
	I0131 03:13:02.790332 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 55/120
	I0131 03:13:03.791762 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 56/120
	I0131 03:13:04.793255 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 57/120
	I0131 03:13:05.794660 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 58/120
	I0131 03:13:06.797118 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 59/120
	I0131 03:13:07.798652 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 60/120
	I0131 03:13:08.800182 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 61/120
	I0131 03:13:09.802017 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 62/120
	I0131 03:13:10.803831 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 63/120
	I0131 03:13:11.805670 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 64/120
	I0131 03:13:12.808106 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 65/120
	I0131 03:13:13.809915 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 66/120
	I0131 03:13:14.811639 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 67/120
	I0131 03:13:16.293102 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 68/120
	I0131 03:13:17.294957 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 69/120
	I0131 03:13:18.297524 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 70/120
	I0131 03:13:19.300207 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 71/120
	I0131 03:13:20.301642 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 72/120
	I0131 03:13:21.303893 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 73/120
	I0131 03:13:22.305441 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 74/120
	I0131 03:13:23.307688 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 75/120
	I0131 03:13:24.309589 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 76/120
	I0131 03:13:25.311309 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 77/120
	I0131 03:13:26.313243 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 78/120
	I0131 03:13:27.315077 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 79/120
	I0131 03:13:28.317401 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 80/120
	I0131 03:13:29.318972 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 81/120
	I0131 03:13:30.320367 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 82/120
	I0131 03:13:31.322398 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 83/120
	I0131 03:13:32.324622 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 84/120
	I0131 03:13:33.326524 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 85/120
	I0131 03:13:34.328094 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 86/120
	I0131 03:13:35.329697 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 87/120
	I0131 03:13:36.331240 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 88/120
	I0131 03:13:37.332778 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 89/120
	I0131 03:13:38.335108 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 90/120
	I0131 03:13:39.337054 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 91/120
	I0131 03:13:40.338625 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 92/120
	I0131 03:13:41.339947 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 93/120
	I0131 03:13:42.341561 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 94/120
	I0131 03:13:43.343631 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 95/120
	I0131 03:13:44.346075 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 96/120
	I0131 03:13:45.347826 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 97/120
	I0131 03:13:46.349838 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 98/120
	I0131 03:13:47.352423 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 99/120
	I0131 03:13:48.354896 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 100/120
	I0131 03:13:49.357527 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 101/120
	I0131 03:13:50.359172 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 102/120
	I0131 03:13:51.361200 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 103/120
	I0131 03:13:52.362771 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 104/120
	I0131 03:13:53.364929 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 105/120
	I0131 03:13:54.366325 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 106/120
	I0131 03:13:55.367845 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 107/120
	I0131 03:13:56.369411 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 108/120
	I0131 03:13:57.371034 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 109/120
	I0131 03:13:58.373321 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 110/120
	I0131 03:13:59.374940 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 111/120
	I0131 03:14:00.376475 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 112/120
	I0131 03:14:01.377905 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 113/120
	I0131 03:14:02.379376 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 114/120
	I0131 03:14:03.381148 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 115/120
	I0131 03:14:04.382801 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 116/120
	I0131 03:14:05.384069 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 117/120
	I0131 03:14:06.385496 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 118/120
	I0131 03:14:07.387247 1464037 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for machine to stop 119/120
	I0131 03:14:08.387915 1464037 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0131 03:14:08.387985 1464037 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0131 03:14:08.389883 1464037 out.go:177] 
	W0131 03:14:08.391320 1464037 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0131 03:14:08.391348 1464037 out.go:239] * 
	* 
	W0131 03:14:08.398464 1464037 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 03:14:08.400093 1464037 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-873005 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
E0131 03:14:10.756960 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005: exit status 3 (18.606266946s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:14:27.006817 1465425 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.123:22: connect: no route to host
	E0131 03:14:27.006841 1465425 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.123:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-873005" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.42s)
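Note on the stop failure above: the log shows libmachine polling the VM state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and, with the guest still reported as running after the final poll, minikube gives up with GUEST_STOP_TIMEOUT (exit status 82). The Go sketch below illustrates that polling pattern only; the State type, the stop/getState callbacks, and the shortened attempt budget are illustrative stand-ins, not minikube's actual internals.

package main

import (
	"errors"
	"fmt"
	"time"
)

// State is a stand-in for the driver's machine state.
type State int

const (
	Running State = iota
	Stopped
)

// waitForStop issues a stop request, then polls the machine state once per
// second up to `attempts` times, mirroring the "Waiting for machine to stop
// N/120" lines captured above.
func waitForStop(stop func() error, getState func() (State, error), attempts int) error {
	if err := stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < attempts; i++ {
		st, err := getState()
		if err != nil {
			return fmt.Errorf("get state: %w", err)
		}
		if st == Stopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// Still running after the last poll: this is the condition that surfaces
	// as GUEST_STOP_TIMEOUT / exit status 82 in the report.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that never shuts down, as in the failing runs above
	// (attempts shortened from 120 so the example finishes quickly).
	err := waitForStop(
		func() error { return nil },
		func() (State, error) { return Running, nil },
		5,
	)
	fmt.Println("result:", err)
}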

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-625812 -n no-preload-625812
E0131 03:14:03.075137 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:05.636027 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-625812 -n no-preload-625812: exit status 3 (3.199483638s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:14:06.142885 1465363 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.23:22: connect: no route to host
	E0131 03:14:06.142906 1465363 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.23:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-625812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-625812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155038056s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.23:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-625812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-625812 -n no-preload-625812
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-625812 -n no-preload-625812: exit status 3 (3.060859065s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:14:15.358887 1465465 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.23:22: connect: no route to host
	E0131 03:14:15.358916 1465465 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.23:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-625812" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
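For context on the assertion at start_stop_delete_test.go:241 above: each EnableAddonAfterStop subtest begins by querying the host state and expecting "Stopped"; here the query returns "Error" with exit status 3 because the status probe cannot reach the node over SSH. A minimal reproduction of that first check with os/exec is sketched below; the binary path, flags, and profile name are copied from the log, while the expectHostState helper itself is illustrative rather than the test's real helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// expectHostState reruns the same status query the test uses and compares the
// reported host state against the expected value.
func expectHostState(profile, want string) error {
	// Flags copied from the failing run; the command error is deliberately
	// ignored so the output comparison still runs when it exits non-zero.
	out, _ := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	got := strings.TrimSpace(string(out))
	if got != want {
		// In this report the command prints "Error" and exits with status 3,
		// because the status probe cannot reach the node over SSH.
		return fmt.Errorf("expected post-stop host status %q but got %q", want, got)
	}
	return nil
}

func main() {
	if err := expectHostState("no-preload-625812", "Stopped"); err != nil {
		fmt.Println(err)
	}
}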

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-711547 -n old-k8s-version-711547
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-711547 -n old-k8s-version-711547: exit status 3 (3.201670867s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:14:19.202835 1465567 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.63:22: connect: no route to host
	E0131 03:14:19.202858 1465567 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.63:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-711547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0131 03:14:19.848440 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-711547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151738782s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.63:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-711547 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-711547 -n old-k8s-version-711547
E0131 03:14:25.376990 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-711547 -n old-k8s-version-711547: exit status 3 (3.060675472s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:14:28.414981 1465656 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.63:22: connect: no route to host
	E0131 03:14:28.415002 1465656 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.63:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-711547" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.41s)
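For context on the exit status 11 above: "minikube addons enable" refuses to proceed while the cluster might be paused, and that pre-flight check lists paused containers via crictl over an SSH session to the node; with the node unreachable, the check itself errors out, which is what MK_ADDON_ENABLE_PAUSED reports here. The sketch below shows that guard pattern under stated assumptions: enableAddon and listPaused are hypothetical stand-ins for minikube's internal check, and the simulated error string is taken from the log.

package main

import (
	"errors"
	"fmt"
)

// enableAddon sketches the guard seen in the log: before enabling anything,
// verify the cluster is not paused. Listing paused containers requires an SSH
// session to the node, so when SSH is unreachable the guard itself fails and
// the whole command aborts (exit status 11, MK_ADDON_ENABLE_PAUSED).
func enableAddon(name string, listPaused func() ([]string, error)) error {
	paused, err := listPaused()
	if err != nil {
		return fmt.Errorf("enabled failed: check paused: list paused: %w", err)
	}
	if len(paused) > 0 {
		return fmt.Errorf("cluster is paused; unpause before enabling %q", name)
	}
	// ... applying the addon's manifests would happen here ...
	return nil
}

func main() {
	// Simulate the failing run: the crictl-over-SSH call cannot reach the node.
	unreachable := func() ([]string, error) {
		return nil, errors.New("crictl list: NewSession: dial tcp 192.168.50.63:22: connect: no route to host")
	}
	if err := enableAddon("dashboard", unreachable); err != nil {
		fmt.Println(err)
	}
}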

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005: exit status 3 (3.200737691s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:14:30.206888 1465697 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.123:22: connect: no route to host
	E0131 03:14:30.206916 1465697 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.123:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-873005 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-873005 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154423164s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.123:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-873005 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005: exit status 3 (3.061122479s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:14:39.422921 1465858 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.123:22: connect: no route to host
	E0131 03:14:39.422945 1465858 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.123:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-873005" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-958254 --alsologtostderr -v=3
E0131 03:14:32.557085 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-958254 --alsologtostderr -v=3: exit status 82 (2m0.302836773s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-958254"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 03:14:32.102302 1465840 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:14:32.102507 1465840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:14:32.102519 1465840 out.go:309] Setting ErrFile to fd 2...
	I0131 03:14:32.102527 1465840 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:14:32.102746 1465840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:14:32.103038 1465840 out.go:303] Setting JSON to false
	I0131 03:14:32.103135 1465840 mustload.go:65] Loading cluster: embed-certs-958254
	I0131 03:14:32.103474 1465840 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:14:32.103541 1465840 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:14:32.103730 1465840 mustload.go:65] Loading cluster: embed-certs-958254
	I0131 03:14:32.103835 1465840 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:14:32.103860 1465840 stop.go:39] StopHost: embed-certs-958254
	I0131 03:14:32.104264 1465840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:14:32.104309 1465840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:14:32.119165 1465840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38863
	I0131 03:14:32.119684 1465840 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:14:32.120400 1465840 main.go:141] libmachine: Using API Version  1
	I0131 03:14:32.120429 1465840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:14:32.120774 1465840 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:14:32.123273 1465840 out.go:177] * Stopping node "embed-certs-958254"  ...
	I0131 03:14:32.124875 1465840 main.go:141] libmachine: Stopping "embed-certs-958254"...
	I0131 03:14:32.124901 1465840 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:14:32.126790 1465840 main.go:141] libmachine: (embed-certs-958254) Calling .Stop
	I0131 03:14:32.129993 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 0/120
	I0131 03:14:33.131727 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 1/120
	I0131 03:14:34.133166 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 2/120
	I0131 03:14:35.134958 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 3/120
	I0131 03:14:36.136334 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 4/120
	I0131 03:14:37.138424 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 5/120
	I0131 03:14:38.140185 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 6/120
	I0131 03:14:39.141752 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 7/120
	I0131 03:14:40.143206 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 8/120
	I0131 03:14:41.144708 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 9/120
	I0131 03:14:42.146501 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 10/120
	I0131 03:14:43.147970 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 11/120
	I0131 03:14:44.149408 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 12/120
	I0131 03:14:45.150836 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 13/120
	I0131 03:14:46.152980 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 14/120
	I0131 03:14:47.155403 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 15/120
	I0131 03:14:48.157013 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 16/120
	I0131 03:14:49.158390 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 17/120
	I0131 03:14:50.159814 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 18/120
	I0131 03:14:51.161366 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 19/120
	I0131 03:14:52.163738 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 20/120
	I0131 03:14:53.165242 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 21/120
	I0131 03:14:54.166936 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 22/120
	I0131 03:14:55.168330 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 23/120
	I0131 03:14:56.169898 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 24/120
	I0131 03:14:57.171807 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 25/120
	I0131 03:14:58.173317 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 26/120
	I0131 03:14:59.175014 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 27/120
	I0131 03:15:00.176687 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 28/120
	I0131 03:15:01.178540 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 29/120
	I0131 03:15:02.180114 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 30/120
	I0131 03:15:03.181533 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 31/120
	I0131 03:15:04.183199 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 32/120
	I0131 03:15:05.184488 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 33/120
	I0131 03:15:06.186080 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 34/120
	I0131 03:15:07.188436 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 35/120
	I0131 03:15:08.189762 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 36/120
	I0131 03:15:09.191046 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 37/120
	I0131 03:15:10.193210 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 38/120
	I0131 03:15:11.194722 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 39/120
	I0131 03:15:12.197074 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 40/120
	I0131 03:15:13.198512 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 41/120
	I0131 03:15:14.199997 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 42/120
	I0131 03:15:15.201608 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 43/120
	I0131 03:15:16.203089 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 44/120
	I0131 03:15:17.205242 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 45/120
	I0131 03:15:18.206852 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 46/120
	I0131 03:15:19.208522 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 47/120
	I0131 03:15:20.209920 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 48/120
	I0131 03:15:21.211481 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 49/120
	I0131 03:15:22.213711 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 50/120
	I0131 03:15:23.215134 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 51/120
	I0131 03:15:24.216610 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 52/120
	I0131 03:15:25.218116 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 53/120
	I0131 03:15:26.219657 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 54/120
	I0131 03:15:27.221659 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 55/120
	I0131 03:15:28.223057 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 56/120
	I0131 03:15:29.224616 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 57/120
	I0131 03:15:30.225928 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 58/120
	I0131 03:15:31.227378 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 59/120
	I0131 03:15:32.229930 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 60/120
	I0131 03:15:33.231232 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 61/120
	I0131 03:15:34.232787 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 62/120
	I0131 03:15:35.234102 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 63/120
	I0131 03:15:36.235621 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 64/120
	I0131 03:15:37.237962 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 65/120
	I0131 03:15:38.239358 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 66/120
	I0131 03:15:39.240893 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 67/120
	I0131 03:15:40.242288 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 68/120
	I0131 03:15:41.243807 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 69/120
	I0131 03:15:42.246037 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 70/120
	I0131 03:15:43.247556 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 71/120
	I0131 03:15:44.249000 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 72/120
	I0131 03:15:45.250502 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 73/120
	I0131 03:15:46.251760 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 74/120
	I0131 03:15:47.254135 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 75/120
	I0131 03:15:48.255701 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 76/120
	I0131 03:15:49.257031 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 77/120
	I0131 03:15:50.258802 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 78/120
	I0131 03:15:51.260261 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 79/120
	I0131 03:15:52.261800 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 80/120
	I0131 03:15:53.263338 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 81/120
	I0131 03:15:54.264730 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 82/120
	I0131 03:15:55.266038 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 83/120
	I0131 03:15:56.267450 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 84/120
	I0131 03:15:57.269686 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 85/120
	I0131 03:15:58.271174 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 86/120
	I0131 03:15:59.272665 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 87/120
	I0131 03:16:00.274149 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 88/120
	I0131 03:16:01.275935 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 89/120
	I0131 03:16:02.278396 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 90/120
	I0131 03:16:03.279950 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 91/120
	I0131 03:16:04.281227 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 92/120
	I0131 03:16:05.282627 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 93/120
	I0131 03:16:06.284032 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 94/120
	I0131 03:16:07.286211 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 95/120
	I0131 03:16:08.287686 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 96/120
	I0131 03:16:09.289009 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 97/120
	I0131 03:16:10.290476 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 98/120
	I0131 03:16:11.291948 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 99/120
	I0131 03:16:12.294206 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 100/120
	I0131 03:16:13.295498 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 101/120
	I0131 03:16:14.296817 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 102/120
	I0131 03:16:15.298182 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 103/120
	I0131 03:16:16.299769 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 104/120
	I0131 03:16:17.301985 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 105/120
	I0131 03:16:18.303456 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 106/120
	I0131 03:16:19.304872 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 107/120
	I0131 03:16:20.306101 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 108/120
	I0131 03:16:21.307502 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 109/120
	I0131 03:16:22.309835 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 110/120
	I0131 03:16:23.311219 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 111/120
	I0131 03:16:24.312698 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 112/120
	I0131 03:16:25.314014 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 113/120
	I0131 03:16:26.315691 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 114/120
	I0131 03:16:27.318231 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 115/120
	I0131 03:16:28.319882 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 116/120
	I0131 03:16:29.321385 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 117/120
	I0131 03:16:30.322992 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 118/120
	I0131 03:16:31.324455 1465840 main.go:141] libmachine: (embed-certs-958254) Waiting for machine to stop 119/120
	I0131 03:16:32.325650 1465840 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0131 03:16:32.325711 1465840 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0131 03:16:32.327921 1465840 out.go:177] 
	W0131 03:16:32.329542 1465840 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0131 03:16:32.329560 1465840 out.go:239] * 
	* 
	W0131 03:16:32.334942 1465840 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0131 03:16:32.337481 1465840 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-958254 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-958254 -n embed-certs-958254
E0131 03:16:33.951679 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:16:41.530545 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:16:44.361408 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:16:47.064178 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-958254 -n embed-certs-958254: exit status 3 (18.541471549s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:16:50.878937 1466271 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0131 03:16:50.878959 1466271 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-958254" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.85s)
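The post-mortems in these Stop failures all bottom out in the same symptom: "dial tcp <node-ip>:22: connect: no route to host". The guest's SSH port is unreachable even though the kvm2 driver still reports the domain as "Running", so every SSH-backed command (status, the addon pre-flight checks) degrades to "Error". The probe below is a minimal way to reproduce that symptom by hand; the probeSSH helper and the hard-coded address (taken from the embed-certs-958254 post-mortem above) are illustrative.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH attempts the same TCP connection minikube's status and addon
// commands need before they can run anything on the node. When the guest
// network is gone, the dial fails with "connect: no route to host" and every
// SSH-backed check reports "Error".
func probeSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		return err // e.g. dial tcp 192.168.39.232:22: connect: no route to host
	}
	defer conn.Close()
	return nil
}

func main() {
	// IP taken from the embed-certs-958254 post-mortem above; adjust as needed.
	if err := probeSSH("192.168.39.232:22"); err != nil {
		fmt.Println("ssh port unreachable:", err)
	} else {
		fmt.Println("ssh port reachable")
	}
}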

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-958254 -n embed-certs-958254
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-958254 -n embed-certs-958254: exit status 3 (3.199658899s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:16:54.078929 1466341 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0131 03:16:54.078963 1466341 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-958254 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-958254 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154052699s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-958254 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-958254 -n embed-certs-958254
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-958254 -n embed-certs-958254: exit status 3 (3.061152773s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0131 03:17:03.295001 1466411 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0131 03:17:03.295026 1466411 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-958254" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-625812 -n no-preload-625812
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-31 03:34:50.820514326 +0000 UTC m=+5449.461862155
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-625812 -n no-preload-625812
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-625812 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-625812 logs -n 25: (1.659247314s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-711547        | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC | 31 Jan 24 03:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-873005  | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC |                     |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229073             | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229073                  | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229073 --memory=2200 --alsologtostderr   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-229073 image list                           | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-096443 | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | disable-driver-mounts-096443                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625812                  | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:25 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-711547             | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-873005       | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-958254            | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:29 UTC |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-958254                 | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:17 UTC | 31 Jan 24 03:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:17:03
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:17:03.356553 1466459 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:17:03.356722 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356731 1466459 out.go:309] Setting ErrFile to fd 2...
	I0131 03:17:03.356736 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356921 1466459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:17:03.357497 1466459 out.go:303] Setting JSON to false
	I0131 03:17:03.358564 1466459 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28767,"bootTime":1706642257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:17:03.358632 1466459 start.go:138] virtualization: kvm guest
	I0131 03:17:03.361346 1466459 out.go:177] * [embed-certs-958254] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:17:03.363037 1466459 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:17:03.363052 1466459 notify.go:220] Checking for updates...
	I0131 03:17:03.364655 1466459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:17:03.366388 1466459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:17:03.368086 1466459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:17:03.369351 1466459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:17:03.370735 1466459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:17:03.372623 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:17:03.373004 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.373116 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.388091 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0131 03:17:03.388612 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.389200 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.389224 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.389606 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.389816 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.390157 1466459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:17:03.390631 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.390696 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.407513 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0131 03:17:03.408013 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.408552 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.408578 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.408936 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.409175 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.446580 1466459 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 03:17:03.447834 1466459 start.go:298] selected driver: kvm2
	I0131 03:17:03.447850 1466459 start.go:902] validating driver "kvm2" against &{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.447974 1466459 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:17:03.448798 1466459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.448929 1466459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:17:03.464292 1466459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:17:03.464713 1466459 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:17:03.464803 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:17:03.464821 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:17:03.464840 1466459 start_flags.go:321] config:
	{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.465034 1466459 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.466926 1466459 out.go:177] * Starting control plane node embed-certs-958254 in cluster embed-certs-958254
	I0131 03:17:03.166851 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:03.468094 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:17:03.468158 1466459 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:17:03.468179 1466459 cache.go:56] Caching tarball of preloaded images
	I0131 03:17:03.468267 1466459 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:17:03.468280 1466459 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:17:03.468422 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:17:03.468675 1466459 start.go:365] acquiring machines lock for embed-certs-958254: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:17:09.246814 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:12.318761 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:18.398731 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:21.470788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:27.550785 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:30.622804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:36.702802 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:39.774755 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:45.854764 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:48.926773 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:55.006804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:58.078768 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:04.158801 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:07.230749 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:13.310800 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:16.382788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:22.462833 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:25.534734 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:31.614821 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:34.686831 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:40.766796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:43.838796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:49.918807 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:52.923102 1465727 start.go:369] acquired machines lock for "old-k8s-version-711547" in 4m24.328353275s
	I0131 03:18:52.923156 1465727 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:18:52.923163 1465727 fix.go:54] fixHost starting: 
	I0131 03:18:52.923502 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:18:52.923535 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:18:52.938858 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0131 03:18:52.939426 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:18:52.939966 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:18:52.939993 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:18:52.940435 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:18:52.940700 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:18:52.940890 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:18:52.942694 1465727 fix.go:102] recreateIfNeeded on old-k8s-version-711547: state=Stopped err=<nil>
	I0131 03:18:52.942735 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	W0131 03:18:52.942937 1465727 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:18:52.944846 1465727 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-711547" ...
	I0131 03:18:52.946449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Start
	I0131 03:18:52.946661 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring networks are active...
	I0131 03:18:52.947481 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network default is active
	I0131 03:18:52.947856 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network mk-old-k8s-version-711547 is active
	I0131 03:18:52.948334 1465727 main.go:141] libmachine: (old-k8s-version-711547) Getting domain xml...
	I0131 03:18:52.949108 1465727 main.go:141] libmachine: (old-k8s-version-711547) Creating domain...
	I0131 03:18:52.920695 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:18:52.920763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:18:52.922905 1465496 machine.go:91] provisioned docker machine in 4m37.358485704s
	I0131 03:18:52.922986 1465496 fix.go:56] fixHost completed within 4m37.381896689s
	I0131 03:18:52.922997 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 4m37.381936859s
	W0131 03:18:52.923026 1465496 start.go:694] error starting host: provision: host is not running
	W0131 03:18:52.923126 1465496 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0131 03:18:52.923138 1465496 start.go:709] Will try again in 5 seconds ...
	I0131 03:18:54.170545 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting to get IP...
	I0131 03:18:54.171580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.171974 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.172053 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.171968 1467209 retry.go:31] will retry after 195.285731ms: waiting for machine to come up
	I0131 03:18:54.368768 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.369288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.369325 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.369224 1467209 retry.go:31] will retry after 291.163288ms: waiting for machine to come up
	I0131 03:18:54.661822 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.662222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.662266 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.662214 1467209 retry.go:31] will retry after 396.125436ms: waiting for machine to come up
	I0131 03:18:55.059613 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.060062 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.060099 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.060009 1467209 retry.go:31] will retry after 609.786973ms: waiting for machine to come up
	I0131 03:18:55.671954 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.672388 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.672431 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.672334 1467209 retry.go:31] will retry after 716.179011ms: waiting for machine to come up
	I0131 03:18:56.390239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:56.390632 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:56.390667 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:56.390568 1467209 retry.go:31] will retry after 881.998023ms: waiting for machine to come up
	I0131 03:18:57.274841 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:57.275260 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:57.275293 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:57.275202 1467209 retry.go:31] will retry after 1.172177257s: waiting for machine to come up
	I0131 03:18:58.449291 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:58.449814 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:58.449869 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:58.449774 1467209 retry.go:31] will retry after 1.046487536s: waiting for machine to come up
	I0131 03:18:57.925392 1465496 start.go:365] acquiring machines lock for no-preload-625812: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:18:59.498215 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:59.498699 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:59.498739 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:59.498640 1467209 retry.go:31] will retry after 1.563889217s: waiting for machine to come up
	I0131 03:19:01.063580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:01.064137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:01.064179 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:01.064063 1467209 retry.go:31] will retry after 2.225514736s: waiting for machine to come up
	I0131 03:19:03.290747 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:03.291285 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:03.291322 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:03.291205 1467209 retry.go:31] will retry after 2.011947032s: waiting for machine to come up
	I0131 03:19:05.305574 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:05.306072 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:05.306106 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:05.306012 1467209 retry.go:31] will retry after 3.104285698s: waiting for machine to come up
	I0131 03:19:08.411557 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:08.412028 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:08.412054 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:08.411975 1467209 retry.go:31] will retry after 4.201966677s: waiting for machine to come up
	I0131 03:19:12.618299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.618866 1465727 main.go:141] libmachine: (old-k8s-version-711547) Found IP for machine: 192.168.50.63
	I0131 03:19:12.618893 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserving static IP address...
	I0131 03:19:12.618913 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has current primary IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.619364 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserved static IP address: 192.168.50.63
	I0131 03:19:12.619389 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting for SSH to be available...
	I0131 03:19:12.619414 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.619452 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | skip adding static IP to network mk-old-k8s-version-711547 - found existing host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"}
	I0131 03:19:12.619471 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Getting to WaitForSSH function...
	I0131 03:19:12.621473 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621783 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.621805 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621891 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH client type: external
	I0131 03:19:12.621934 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa (-rw-------)
	I0131 03:19:12.621965 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:12.621977 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | About to run SSH command:
	I0131 03:19:12.621987 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | exit 0
	I0131 03:19:12.718254 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:12.718659 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetConfigRaw
	I0131 03:19:12.719369 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:12.722134 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722588 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.722611 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722906 1465727 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/config.json ...
	I0131 03:19:12.723101 1465727 machine.go:88] provisioning docker machine ...
	I0131 03:19:12.723121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:12.723399 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723611 1465727 buildroot.go:166] provisioning hostname "old-k8s-version-711547"
	I0131 03:19:12.723630 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723795 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.726052 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726463 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.726507 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726656 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.726832 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727022 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727122 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.727283 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.727665 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.727680 1465727 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-711547 && echo "old-k8s-version-711547" | sudo tee /etc/hostname
	I0131 03:19:12.870818 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-711547
	
	I0131 03:19:12.870872 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.873799 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874205 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.874242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874355 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.874585 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874774 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874920 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.875079 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.875412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.875428 1465727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-711547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-711547/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-711547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:13.014386 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:13.014419 1465727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:13.014447 1465727 buildroot.go:174] setting up certificates
	I0131 03:19:13.014460 1465727 provision.go:83] configureAuth start
	I0131 03:19:13.014471 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:13.014821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:13.017730 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018105 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.018149 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018286 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.020361 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020680 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.020707 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020896 1465727 provision.go:138] copyHostCerts
	I0131 03:19:13.020961 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:13.020975 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:13.021069 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:13.021199 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:13.021212 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:13.021252 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:13.021393 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:13.021404 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:13.021442 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:13.021512 1465727 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-711547 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube old-k8s-version-711547]
	I0131 03:19:13.265370 1465727 provision.go:172] copyRemoteCerts
	I0131 03:19:13.265438 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:13.265466 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.268546 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269055 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.269090 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269281 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.269518 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.269688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.269849 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.362848 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:13.384287 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0131 03:19:13.405813 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
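
The three scp calls above push the CA, the server certificate and the server key into /etc/docker on the guest. As a hedged sketch only (paths taken from the log, commands are plain openssl usage and assume a reasonably recent OpenSSL inside the VM), the copied material can be sanity-checked like this:

    # run inside the guest, e.g. via: minikube ssh -p old-k8s-version-711547
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem    # expect: /etc/docker/server.pem: OK
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
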
	I0131 03:19:13.427630 1465727 provision.go:86] duration metric: configureAuth took 413.151329ms
	I0131 03:19:13.427671 1465727 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:13.427880 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:19:13.427963 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.430829 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.431299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431515 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.431771 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.431939 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.432092 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.432256 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.432619 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.432638 1465727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:14.011257 1465898 start.go:369] acquired machines lock for "default-k8s-diff-port-873005" in 4m34.419162413s
	I0131 03:19:14.011330 1465898 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:14.011340 1465898 fix.go:54] fixHost starting: 
	I0131 03:19:14.011729 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:14.011767 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:14.028941 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0131 03:19:14.029399 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:14.029937 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:19:14.029968 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:14.030321 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:14.030510 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:14.030692 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:19:14.032290 1465898 fix.go:102] recreateIfNeeded on default-k8s-diff-port-873005: state=Stopped err=<nil>
	I0131 03:19:14.032322 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	W0131 03:19:14.032499 1465898 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:14.034263 1465898 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-873005" ...
	I0131 03:19:14.035857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Start
	I0131 03:19:14.036028 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring networks are active...
	I0131 03:19:14.036734 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network default is active
	I0131 03:19:14.037140 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network mk-default-k8s-diff-port-873005 is active
	I0131 03:19:14.037572 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Getting domain xml...
	I0131 03:19:14.038254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Creating domain...
	I0131 03:19:13.745584 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:13.745630 1465727 machine.go:91] provisioned docker machine in 1.02251207s
	I0131 03:19:13.745646 1465727 start.go:300] post-start starting for "old-k8s-version-711547" (driver="kvm2")
	I0131 03:19:13.745663 1465727 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:13.745688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:13.746069 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:13.746100 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.748837 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749259 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.749309 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749489 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.749691 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.749848 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.749999 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.844423 1465727 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:13.848230 1465727 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:13.848263 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:13.848346 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:13.848431 1465727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:13.848517 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:13.857046 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:13.877753 1465727 start.go:303] post-start completed in 132.085834ms
	I0131 03:19:13.877806 1465727 fix.go:56] fixHost completed within 20.954639604s
	I0131 03:19:13.877836 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.880627 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.880914 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.880948 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.881168 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.881401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881594 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881802 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.882012 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.882412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.882424 1465727 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0131 03:19:14.011062 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671153.963761136
	
	I0131 03:19:14.011098 1465727 fix.go:206] guest clock: 1706671153.963761136
	I0131 03:19:14.011111 1465727 fix.go:219] Guest: 2024-01-31 03:19:13.963761136 +0000 UTC Remote: 2024-01-31 03:19:13.877812082 +0000 UTC m=+285.451358106 (delta=85.949054ms)
	I0131 03:19:14.011141 1465727 fix.go:190] guest clock delta is within tolerance: 85.949054ms
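
fix.go compares the guest clock returned by "date +%s.%N" with the local wall clock and accepts the roughly 86 ms skew because it is inside the tolerance. A minimal sketch of the same comparison, with an illustrative ssh target:

    guest=$(ssh docker@192.168.50.63 'date +%s.%N')   # VM clock, same command as above
    host_now=$(date +%s.%N)                           # local (Jenkins host) clock
    awk -v g="$guest" -v h="$host_now" 'BEGIN { printf "delta: %+.3fs\n", g - h }'
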
	I0131 03:19:14.011149 1465727 start.go:83] releasing machines lock for "old-k8s-version-711547", held for 21.088010365s
	I0131 03:19:14.011234 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.011556 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:14.014323 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014754 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.014790 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014966 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015623 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015846 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015953 1465727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:14.016017 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.016087 1465727 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:14.016121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.018767 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019063 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019147 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019185 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019338 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019422 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019450 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019500 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019693 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.019775 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019854 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.019952 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.020096 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.111280 1465727 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:14.148710 1465727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:14.287476 1465727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:14.293232 1465727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:14.293309 1465727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:14.306910 1465727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:14.306939 1465727 start.go:475] detecting cgroup driver to use...
	I0131 03:19:14.307001 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:14.325824 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:14.339835 1465727 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:14.339908 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:14.354064 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:14.367342 1465727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:14.476462 1465727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:14.602643 1465727 docker.go:233] disabling docker service ...
	I0131 03:19:14.602711 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:14.618228 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:14.630450 1465727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:14.758176 1465727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:14.870949 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:14.882268 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:14.898622 1465727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0131 03:19:14.898685 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.907377 1465727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:14.907470 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.915868 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.924046 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
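
The sed commands rewrite CRI-O's drop-in so the pause image and cgroup handling match what the kubelet is configured with later in this run (cgroupDriver: cgroupfs). After they execute, /etc/crio/crio.conf.d/02-crio.conf should contain lines equivalent to this sketch:

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
    pause_image = "registry.k8s.io/pause:3.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
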
	I0131 03:19:14.932324 1465727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:14.941046 1465727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:14.949134 1465727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:14.949196 1465727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:14.965561 1465727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
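
The sysctl probe fails only because br_netfilter is not loaded yet, which the log itself notes "might be okay"; minikube then loads the module and enables IPv4 forwarding. The equivalent manual sequence, for reference:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables          # resolvable once the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward    # same effect as the echo above
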
	I0131 03:19:14.973790 1465727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:15.078782 1465727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:15.239650 1465727 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:15.239735 1465727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:15.244418 1465727 start.go:543] Will wait 60s for crictl version
	I0131 03:19:15.244501 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:15.247984 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:15.287716 1465727 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:15.287827 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.339818 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.393318 1465727 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0131 03:19:15.394911 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:15.397888 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:15.398313 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398637 1465727 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:15.402865 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
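
The bash one-liner rewrites /etc/hosts atomically: it filters out any stale host.minikube.internal entry, appends the gateway IP, and copies the temporary file back with sudo. Its net effect is a single line such as the following (the same pattern is reused for control-plane.minikube.internal further down):

    # line appended/refreshed in the guest's /etc/hosts
    192.168.50.1	host.minikube.internal

    grep minikube.internal /etc/hosts    # quick way to confirm the entries
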
	I0131 03:19:15.414268 1465727 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 03:19:15.414361 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:15.460589 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:15.460676 1465727 ssh_runner.go:195] Run: which lz4
	I0131 03:19:15.464663 1465727 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0131 03:19:15.468694 1465727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:15.468728 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0131 03:19:17.115892 1465727 crio.go:444] Took 1.651263 seconds to copy over tarball
	I0131 03:19:17.115979 1465727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:15.308732 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting to get IP...
	I0131 03:19:15.309704 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310121 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.310092 1467325 retry.go:31] will retry after 215.51674ms: waiting for machine to come up
	I0131 03:19:15.527614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528155 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528192 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.528108 1467325 retry.go:31] will retry after 346.07944ms: waiting for machine to come up
	I0131 03:19:15.875792 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876340 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876375 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.876290 1467325 retry.go:31] will retry after 476.08407ms: waiting for machine to come up
	I0131 03:19:16.353712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354323 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.354196 1467325 retry.go:31] will retry after 382.739917ms: waiting for machine to come up
	I0131 03:19:16.738958 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739534 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739566 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.739504 1467325 retry.go:31] will retry after 511.138171ms: waiting for machine to come up
	I0131 03:19:17.252373 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252862 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252902 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:17.252798 1467325 retry.go:31] will retry after 879.985444ms: waiting for machine to come up
	I0131 03:19:18.134757 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135287 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135313 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:18.135233 1467325 retry.go:31] will retry after 1.043236668s: waiting for machine to come up
	I0131 03:19:19.179844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180339 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180369 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:19.180288 1467325 retry.go:31] will retry after 1.296129808s: waiting for machine to come up
	I0131 03:19:19.822171 1465727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706149181s)
	I0131 03:19:19.822217 1465727 crio.go:451] Took 2.706292 seconds to extract the tarball
	I0131 03:19:19.822233 1465727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:19.861493 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:19.905950 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:19.905979 1465727 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:19:19.906033 1465727 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.906061 1465727 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.906080 1465727 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.906077 1465727 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.906094 1465727 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:19.906099 1465727 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.906111 1465727 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0131 03:19:19.906179 1465727 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907636 1465727 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.907728 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.907746 1465727 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907750 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.907749 1465727 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.907783 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.907805 1465727 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0131 03:19:19.907807 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.091717 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.132448 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.140199 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0131 03:19:20.146177 1465727 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0131 03:19:20.146263 1465727 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.146324 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.206757 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.216932 1465727 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0131 03:19:20.216985 1465727 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.217082 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219340 1465727 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0131 03:19:20.219367 1465727 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0131 03:19:20.219390 1465727 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.219408 1465727 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.219432 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219449 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.222519 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.241389 1465727 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0131 03:19:20.241449 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.241452 1465727 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0131 03:19:20.241566 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.293129 1465727 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0131 03:19:20.293183 1465727 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.293213 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.293262 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.293284 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.293232 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321447 1465727 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0131 03:19:20.321512 1465727 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.321576 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321605 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0131 03:19:20.321743 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0131 03:19:20.401651 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0131 03:19:20.401720 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.401731 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0131 03:19:20.401793 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0131 03:19:20.401872 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0131 03:19:20.401945 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.439360 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0131 03:19:20.449635 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0131 03:19:20.765201 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:20.911818 1465727 cache_images.go:92] LoadImages completed in 1.005820808s
	W0131 03:19:20.911923 1465727 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0131 03:19:20.912019 1465727 ssh_runner.go:195] Run: crio config
	I0131 03:19:20.978267 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:20.978296 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:20.978318 1465727 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:20.978361 1465727 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-711547 NodeName:old-k8s-version-711547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0131 03:19:20.978540 1465727 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-711547"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-711547
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.63:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:20.978635 1465727 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-711547 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:19:20.978690 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0131 03:19:20.988177 1465727 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:20.988281 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:20.999558 1465727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0131 03:19:21.018567 1465727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:21.036137 1465727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0131 03:19:21.051742 1465727 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:21.056334 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:21.068635 1465727 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547 for IP: 192.168.50.63
	I0131 03:19:21.068670 1465727 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:21.068847 1465727 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:21.068894 1465727 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:21.069089 1465727 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/client.key
	I0131 03:19:21.069185 1465727 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key.1519f60b
	I0131 03:19:21.069262 1465727 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key
	I0131 03:19:21.069418 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:21.069460 1465727 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:21.069476 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:21.069517 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:21.069556 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:21.069595 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:21.069658 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:21.070416 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:21.096160 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:21.119906 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:21.144478 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:21.169174 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:21.191807 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:21.215673 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:21.237705 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:21.262763 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:21.284935 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:21.306372 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:21.327718 1465727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:21.343219 1465727 ssh_runner.go:195] Run: openssl version
	I0131 03:19:21.348904 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:21.358119 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362537 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362619 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.368555 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:21.378236 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:21.387651 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392087 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392155 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.397511 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:21.406631 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:21.416176 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420716 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420816 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.426032 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:21.434979 1465727 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:21.439153 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:21.444648 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:21.450243 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:21.455489 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:21.460794 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:21.466219 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
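
Each "openssl x509 -checkend 86400" call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will, so minikube skips regenerating it. As an illustrative standalone check:

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400; then
        echo "certificate still valid 24h from now; minikube keeps it"
    else
        echo "certificate expires within 24h; minikube would regenerate it"
    fi
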
	I0131 03:19:21.471530 1465727 kubeadm.go:404] StartCluster: {Name:old-k8s-version-711547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:21.471628 1465727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:21.471677 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:21.508722 1465727 cri.go:89] found id: ""
	I0131 03:19:21.508795 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:21.517913 1465727 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:21.517943 1465727 kubeadm.go:636] restartCluster start
	I0131 03:19:21.518012 1465727 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:21.526290 1465727 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:21.527501 1465727 kubeconfig.go:92] found "old-k8s-version-711547" server: "https://192.168.50.63:8443"
	I0131 03:19:21.530259 1465727 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:21.538442 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:21.538528 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:21.548956 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.038468 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.038574 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.049394 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.538605 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.538701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.549651 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:23.038857 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.038988 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.050489 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:20.478788 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479296 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479341 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:20.479262 1467325 retry.go:31] will retry after 1.385706797s: waiting for machine to come up
	I0131 03:19:21.867040 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867480 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867506 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:21.867432 1467325 retry.go:31] will retry after 2.023566474s: waiting for machine to come up
	I0131 03:19:23.893713 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894188 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894222 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:23.894119 1467325 retry.go:31] will retry after 2.335724195s: waiting for machine to come up
	I0131 03:19:23.539335 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.539444 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.550866 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.038592 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.038710 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.050077 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.538579 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.538661 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.549810 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.039420 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.039512 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.051101 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.538549 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.538654 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.552821 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.039279 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.039395 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.050150 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.538699 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.538841 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.553086 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.038585 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.038701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.050685 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.539261 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.539392 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.550316 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:28.039448 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.039564 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.051196 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.231540 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231945 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231970 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:26.231895 1467325 retry.go:31] will retry after 2.956919877s: waiting for machine to come up
	I0131 03:19:29.190010 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190513 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190549 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:29.190433 1467325 retry.go:31] will retry after 3.186526476s: waiting for machine to come up
	I0131 03:19:28.539230 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.539326 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.551055 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.038675 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.038783 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.049926 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.538507 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.538606 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.549309 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.039257 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.039359 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.050555 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.539147 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.539286 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.550179 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.038685 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.038809 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.050144 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.538939 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.539024 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.549604 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.549647 1465727 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:31.549660 1465727 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:31.549678 1465727 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:31.549770 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:31.587751 1465727 cri.go:89] found id: ""
	I0131 03:19:31.587822 1465727 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:31.603397 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:31.612195 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:31.612263 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620959 1465727 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620984 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:31.737416 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.645078 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.861238 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.944897 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:33.048396 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:33.048496 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:33.587337 1466459 start.go:369] acquired machines lock for "embed-certs-958254" in 2m30.118621848s
	I0131 03:19:33.587411 1466459 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:33.587444 1466459 fix.go:54] fixHost starting: 
	I0131 03:19:33.587872 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:33.587906 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:33.608024 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0131 03:19:33.608545 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:33.609015 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:19:33.609048 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:33.609468 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:33.609659 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:33.609796 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:19:33.611524 1466459 fix.go:102] recreateIfNeeded on embed-certs-958254: state=Stopped err=<nil>
	I0131 03:19:33.611572 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	W0131 03:19:33.611752 1466459 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:33.613613 1466459 out.go:177] * Restarting existing kvm2 VM for "embed-certs-958254" ...
	I0131 03:19:32.379632 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380099 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380134 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Found IP for machine: 192.168.61.123
	I0131 03:19:32.380150 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserving static IP address...
	I0131 03:19:32.380555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserved static IP address: 192.168.61.123
	I0131 03:19:32.380594 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.380610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for SSH to be available...
	I0131 03:19:32.380647 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | skip adding static IP to network mk-default-k8s-diff-port-873005 - found existing host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"}
	I0131 03:19:32.380661 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Getting to WaitForSSH function...
	I0131 03:19:32.382401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.382787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382872 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH client type: external
	I0131 03:19:32.382903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa (-rw-------)
	I0131 03:19:32.382943 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:32.382959 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | About to run SSH command:
	I0131 03:19:32.382984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | exit 0
	I0131 03:19:32.470672 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:32.471097 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetConfigRaw
	I0131 03:19:32.471768 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.474225 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474597 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.474631 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474948 1465898 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/config.json ...
	I0131 03:19:32.475139 1465898 machine.go:88] provisioning docker machine ...
	I0131 03:19:32.475158 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:32.475374 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475542 1465898 buildroot.go:166] provisioning hostname "default-k8s-diff-port-873005"
	I0131 03:19:32.475564 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475720 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.478005 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478356 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.478391 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478466 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.478693 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.478871 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.479083 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.479287 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.479622 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.479636 1465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-873005 && echo "default-k8s-diff-port-873005" | sudo tee /etc/hostname
	I0131 03:19:32.608136 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-873005
	
	I0131 03:19:32.608173 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.611145 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611544 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.611580 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611716 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.611937 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612154 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612354 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.612511 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.612878 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.612903 1465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-873005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-873005/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-873005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:32.734103 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:32.734144 1465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:32.734176 1465898 buildroot.go:174] setting up certificates
	I0131 03:19:32.734196 1465898 provision.go:83] configureAuth start
	I0131 03:19:32.734209 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.734550 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.737468 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.737810 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.737844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.738096 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.740787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.741233 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741374 1465898 provision.go:138] copyHostCerts
	I0131 03:19:32.741429 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:32.741442 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:32.741498 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:32.741632 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:32.741642 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:32.741665 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:32.741716 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:32.741722 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:32.741738 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:32.741784 1465898 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-873005 san=[192.168.61.123 192.168.61.123 localhost 127.0.0.1 minikube default-k8s-diff-port-873005]
	I0131 03:19:32.850632 1465898 provision.go:172] copyRemoteCerts
	I0131 03:19:32.850695 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:32.850721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.853291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.853651 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.854016 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.854194 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.854361 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:32.943528 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0131 03:19:32.970345 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:32.995909 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:33.024408 1465898 provision.go:86] duration metric: configureAuth took 290.196472ms
	I0131 03:19:33.024438 1465898 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:33.024661 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:33.024755 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.027751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.028312 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028469 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.028719 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.028961 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.029180 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.029424 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.029790 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.029810 1465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:33.350806 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:33.350839 1465898 machine.go:91] provisioned docker machine in 875.685131ms
	I0131 03:19:33.350855 1465898 start.go:300] post-start starting for "default-k8s-diff-port-873005" (driver="kvm2")
	I0131 03:19:33.350871 1465898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:33.350895 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.351287 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:33.351334 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.353986 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354419 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.354443 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354689 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.354898 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.355046 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.355221 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.439603 1465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:33.443119 1465898 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:33.443145 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:33.443222 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:33.443320 1465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:33.443430 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:33.451425 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:33.471270 1465898 start.go:303] post-start completed in 120.397142ms
	I0131 03:19:33.471302 1465898 fix.go:56] fixHost completed within 19.459960903s
	I0131 03:19:33.471326 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.473691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474060 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.474091 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474244 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.474430 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474627 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474753 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.474918 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.475237 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.475249 1465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:33.587174 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671173.532604525
	
	I0131 03:19:33.587202 1465898 fix.go:206] guest clock: 1706671173.532604525
	I0131 03:19:33.587217 1465898 fix.go:219] Guest: 2024-01-31 03:19:33.532604525 +0000 UTC Remote: 2024-01-31 03:19:33.47130747 +0000 UTC m=+294.038044427 (delta=61.297055ms)
	I0131 03:19:33.587243 1465898 fix.go:190] guest clock delta is within tolerance: 61.297055ms
	I0131 03:19:33.587251 1465898 start.go:83] releasing machines lock for "default-k8s-diff-port-873005", held for 19.57594393s
	I0131 03:19:33.587282 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.587557 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:33.590395 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590776 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.590809 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590995 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591623 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591822 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591926 1465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:33.591999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.592054 1465898 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:33.592078 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.594999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595446 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.595477 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595644 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.595805 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595879 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596082 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596258 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.596286 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.596380 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.596390 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.596579 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596760 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596951 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.715222 1465898 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:33.721794 1465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:33.871506 1465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:33.877488 1465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:33.877596 1465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:33.896121 1465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:33.896156 1465898 start.go:475] detecting cgroup driver to use...
	I0131 03:19:33.896245 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:33.912876 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:33.927661 1465898 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:33.927743 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:33.944332 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:33.960438 1465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:34.086879 1465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:34.218866 1465898 docker.go:233] disabling docker service ...
	I0131 03:19:34.218946 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:34.233585 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:34.246358 1465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:34.387480 1465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:34.513082 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:34.526532 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:34.544801 1465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:34.544902 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.558806 1465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:34.558905 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.569251 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.582784 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.595979 1465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:34.608318 1465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:34.616417 1465898 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:34.616494 1465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:34.629018 1465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:34.638513 1465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:34.753541 1465898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:34.963779 1465898 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:34.963868 1465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:34.969755 1465898 start.go:543] Will wait 60s for crictl version
	I0131 03:19:34.969826 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:19:34.974176 1465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:35.020759 1465898 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:35.020850 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.072999 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.143712 1465898 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:19:33.615078 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Start
	I0131 03:19:33.615258 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring networks are active...
	I0131 03:19:33.616056 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network default is active
	I0131 03:19:33.616376 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network mk-embed-certs-958254 is active
	I0131 03:19:33.616770 1466459 main.go:141] libmachine: (embed-certs-958254) Getting domain xml...
	I0131 03:19:33.617424 1466459 main.go:141] libmachine: (embed-certs-958254) Creating domain...
	I0131 03:19:35.016562 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting to get IP...
	I0131 03:19:35.017711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.018134 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.018234 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.018115 1467469 retry.go:31] will retry after 281.115622ms: waiting for machine to come up
	I0131 03:19:35.300987 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.301642 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.301672 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.301583 1467469 retry.go:31] will retry after 382.696531ms: waiting for machine to come up
	I0131 03:19:35.686371 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.686945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.686983 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.686881 1467469 retry.go:31] will retry after 467.397008ms: waiting for machine to come up
	I0131 03:19:36.156392 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.157129 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.157161 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.157087 1467469 retry.go:31] will retry after 588.034996ms: waiting for machine to come up
	I0131 03:19:36.747103 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.747739 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.747771 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.747711 1467469 retry.go:31] will retry after 570.532804ms: waiting for machine to come up
	I0131 03:19:37.319694 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.320231 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.320264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.320206 1467469 retry.go:31] will retry after 572.77687ms: waiting for machine to come up
	I0131 03:19:37.895308 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.895814 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.895844 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.895769 1467469 retry.go:31] will retry after 833.23491ms: waiting for machine to come up
	I0131 03:19:33.549149 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.048799 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.549314 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.048885 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.075463 1465727 api_server.go:72] duration metric: took 2.027068042s to wait for apiserver process to appear ...
	I0131 03:19:35.075490 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:35.075525 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:35.145198 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:35.148610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149052 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:35.149087 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149329 1465898 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:35.153543 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:35.169144 1465898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:35.169226 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:35.217572 1465898 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:35.217675 1465898 ssh_runner.go:195] Run: which lz4
	I0131 03:19:35.221897 1465898 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0131 03:19:35.226333 1465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:35.226373 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:36.870773 1465898 crio.go:444] Took 1.648904 seconds to copy over tarball
	I0131 03:19:36.870903 1465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:38.730812 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:38.731317 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:38.731367 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:38.731283 1467469 retry.go:31] will retry after 1.083923411s: waiting for machine to come up
	I0131 03:19:39.816550 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:39.817000 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:39.817035 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:39.816957 1467469 retry.go:31] will retry after 1.414569505s: waiting for machine to come up
	I0131 03:19:41.232711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:41.233072 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:41.233104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:41.233020 1467469 retry.go:31] will retry after 1.829994317s: waiting for machine to come up
	I0131 03:19:43.065343 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:43.065823 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:43.065857 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:43.065760 1467469 retry.go:31] will retry after 2.506323142s: waiting for machine to come up
	I0131 03:19:40.076389 1465727 api_server.go:269] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0131 03:19:40.076448 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.717017 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.717059 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:41.717079 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.738258 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.738291 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:42.075699 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.730135 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.730181 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:42.730203 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.805335 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.805375 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.076421 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.082935 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:43.082971 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.575664 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.582814 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:19:43.593073 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:19:43.593113 1465727 api_server.go:131] duration metric: took 8.517613988s to wait for apiserver health ...
	I0131 03:19:43.593127 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:43.593144 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:43.594982 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:19:39.815034 1465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944091458s)
	I0131 03:19:39.815074 1465898 crio.go:451] Took 2.944224 seconds to extract the tarball
	I0131 03:19:39.815090 1465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:39.855696 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:39.904386 1465898 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:19:39.904418 1465898 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:19:39.904509 1465898 ssh_runner.go:195] Run: crio config
	I0131 03:19:39.972894 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:19:39.972928 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:39.972957 1465898 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:39.972985 1465898 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-873005 NodeName:default-k8s-diff-port-873005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:19:39.973201 1465898 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-873005"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:39.973298 1465898 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-873005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0131 03:19:39.973365 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:19:39.982097 1465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:39.982206 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:39.993781 1465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0131 03:19:40.012618 1465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:40.031973 1465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0131 03:19:40.049646 1465898 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:40.053498 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:40.066873 1465898 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005 for IP: 192.168.61.123
	I0131 03:19:40.066914 1465898 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:40.067198 1465898 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:40.067254 1465898 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:40.067376 1465898 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/client.key
	I0131 03:19:40.067474 1465898 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key.596e38b1
	I0131 03:19:40.067535 1465898 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key
	I0131 03:19:40.067748 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:40.067797 1465898 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:40.067813 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:40.067850 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:40.067885 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:40.067924 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:40.067978 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:40.068687 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:40.094577 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:40.117833 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:40.140782 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:40.163701 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:40.187177 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:40.218570 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:40.246136 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:40.275403 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:40.302040 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:40.327371 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:40.352927 1465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:40.371690 1465898 ssh_runner.go:195] Run: openssl version
	I0131 03:19:40.377700 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:40.387507 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393609 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393701 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.401095 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:40.415647 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:40.426902 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431720 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431803 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.437347 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:40.446986 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:40.457779 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462716 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462790 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.468321 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:40.481055 1465898 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:40.486096 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:40.492538 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:40.498664 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:40.504630 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:40.510588 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:40.516480 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:19:40.524391 1465898 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-873005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:40.524509 1465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:40.524570 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:40.575788 1465898 cri.go:89] found id: ""
	I0131 03:19:40.575887 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:40.585291 1465898 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:40.585320 1465898 kubeadm.go:636] restartCluster start
	I0131 03:19:40.585383 1465898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:40.594593 1465898 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:40.596215 1465898 kubeconfig.go:92] found "default-k8s-diff-port-873005" server: "https://192.168.61.123:8444"
	I0131 03:19:40.600123 1465898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:40.609224 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:40.609289 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:40.620769 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.110331 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.110450 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.121982 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.609492 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.609592 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.621972 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.109411 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.109515 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.124820 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.609296 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.609412 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.621029 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.109511 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.109606 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.124911 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.609397 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.609514 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.626240 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:44.109323 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.109419 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.124549 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.573357 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:45.573785 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:45.573821 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:45.573735 1467469 retry.go:31] will retry after 3.608126135s: waiting for machine to come up
	I0131 03:19:43.596636 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:19:43.613239 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:19:43.655123 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:19:43.665773 1465727 system_pods.go:59] 7 kube-system pods found
	I0131 03:19:43.665819 1465727 system_pods.go:61] "coredns-5644d7b6d9-2g2fj" [fc3c718c-696b-4a57-83e2-d9ee3bed6923] Running
	I0131 03:19:43.665844 1465727 system_pods.go:61] "etcd-old-k8s-version-711547" [4c5a2527-ffa7-4771-8380-56556030ad90] Running
	I0131 03:19:43.665852 1465727 system_pods.go:61] "kube-apiserver-old-k8s-version-711547" [df7cbcbe-1aeb-4986-82e5-70d495b2579d] Running
	I0131 03:19:43.665859 1465727 system_pods.go:61] "kube-controller-manager-old-k8s-version-711547" [21cccd1c-4b8e-4d4f-956d-872aa474e9d8] Running
	I0131 03:19:43.665868 1465727 system_pods.go:61] "kube-proxy-7dtkz" [aac05831-252e-486d-8bc8-772731374a89] Running
	I0131 03:19:43.665875 1465727 system_pods.go:61] "kube-scheduler-old-k8s-version-711547" [da2f43ad-bbc3-44fb-a608-08c2ae08818f] Running
	I0131 03:19:43.665885 1465727 system_pods.go:61] "storage-provisioner" [f16355c3-b573-40f2-ad98-32c077f04e46] Running
	I0131 03:19:43.665894 1465727 system_pods.go:74] duration metric: took 10.742015ms to wait for pod list to return data ...
	I0131 03:19:43.665915 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:19:43.670287 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:19:43.670328 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:19:43.670343 1465727 node_conditions.go:105] duration metric: took 4.422551ms to run NodePressure ...
	I0131 03:19:43.670366 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:43.947579 1465727 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:19:43.952499 1465727 retry.go:31] will retry after 170.414704ms: kubelet not initialised
	I0131 03:19:44.130420 1465727 retry.go:31] will retry after 504.822426ms: kubelet not initialised
	I0131 03:19:44.640095 1465727 retry.go:31] will retry after 519.270243ms: kubelet not initialised
	I0131 03:19:45.164417 1465727 retry.go:31] will retry after 730.256814ms: kubelet not initialised
	I0131 03:19:45.903026 1465727 retry.go:31] will retry after 853.098887ms: kubelet not initialised
	I0131 03:19:46.764300 1465727 retry.go:31] will retry after 2.467014704s: kubelet not initialised
	I0131 03:19:44.609572 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.609682 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.625242 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.109761 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.109894 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.121467 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.610114 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.610210 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.621421 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.109926 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.109996 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.121003 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.609509 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.609649 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.620779 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.110208 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.110316 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.122909 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.609355 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.609474 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.620375 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.109993 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.110131 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.123531 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.610170 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.610266 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.620964 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.109874 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.109997 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.121344 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.183666 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:49.184174 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:49.184209 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:49.184103 1467469 retry.go:31] will retry after 3.277150176s: waiting for machine to come up
	I0131 03:19:52.465465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.465830 1466459 main.go:141] libmachine: (embed-certs-958254) Found IP for machine: 192.168.39.232
	I0131 03:19:52.465849 1466459 main.go:141] libmachine: (embed-certs-958254) Reserving static IP address...
	I0131 03:19:52.465863 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has current primary IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.466264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.466307 1466459 main.go:141] libmachine: (embed-certs-958254) Reserved static IP address: 192.168.39.232
	I0131 03:19:52.466331 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting for SSH to be available...
	I0131 03:19:52.466352 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | skip adding static IP to network mk-embed-certs-958254 - found existing host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"}
	I0131 03:19:52.466368 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Getting to WaitForSSH function...
	I0131 03:19:52.468562 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.468867 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.468900 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.469041 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH client type: external
	I0131 03:19:52.469074 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa (-rw-------)
	I0131 03:19:52.469117 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:52.469137 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | About to run SSH command:
	I0131 03:19:52.469151 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | exit 0
	I0131 03:19:52.554397 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:52.554838 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetConfigRaw
	I0131 03:19:52.555611 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.558511 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.558906 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.558945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.559188 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:19:52.559400 1466459 machine.go:88] provisioning docker machine ...
	I0131 03:19:52.559421 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:52.559645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559816 1466459 buildroot.go:166] provisioning hostname "embed-certs-958254"
	I0131 03:19:52.559831 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559994 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.562543 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.562901 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.562933 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.563085 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.563276 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563436 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563628 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.563800 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.564147 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.564161 1466459 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-958254 && echo "embed-certs-958254" | sudo tee /etc/hostname
	I0131 03:19:52.688777 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-958254
	
	I0131 03:19:52.688817 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.692015 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.692497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692797 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.693013 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693184 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693388 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.693579 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.694043 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.694071 1466459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-958254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-958254/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-958254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:52.821443 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:52.821489 1466459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:52.821543 1466459 buildroot.go:174] setting up certificates
	I0131 03:19:52.821567 1466459 provision.go:83] configureAuth start
	I0131 03:19:52.821583 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.821930 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.825108 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825496 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.825527 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825756 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.828269 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828621 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.828651 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828893 1466459 provision.go:138] copyHostCerts
	I0131 03:19:52.828964 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:52.828987 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:52.829069 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:52.829194 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:52.829209 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:52.829243 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:52.829323 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:52.829335 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:52.829366 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:52.829466 1466459 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.embed-certs-958254 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube embed-certs-958254]
	I0131 03:19:52.931760 1466459 provision.go:172] copyRemoteCerts
	I0131 03:19:52.931825 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:52.931856 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.935111 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935440 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.935465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935721 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.935915 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.936117 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.936273 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.024352 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:53.051185 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:19:53.076996 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:53.097919 1466459 provision.go:86] duration metric: configureAuth took 276.335726ms
	I0131 03:19:53.097951 1466459 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:53.098189 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:53.098319 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.101687 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102128 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.102178 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102334 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.102610 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.102877 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.103072 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.103309 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.103829 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.103860 1466459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:49.236547 1465727 retry.go:31] will retry after 1.793227218s: kubelet not initialised
	I0131 03:19:51.035248 1465727 retry.go:31] will retry after 2.779615352s: kubelet not initialised
	I0131 03:19:53.664145 1465496 start.go:369] acquired machines lock for "no-preload-625812" in 55.738696582s
	I0131 03:19:53.664205 1465496 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:53.664216 1465496 fix.go:54] fixHost starting: 
	I0131 03:19:53.664629 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:53.664680 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:53.683147 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0131 03:19:53.684034 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:53.684629 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:19:53.684660 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:53.685055 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:53.685266 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:19:53.685468 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:19:53.687260 1465496 fix.go:102] recreateIfNeeded on no-preload-625812: state=Stopped err=<nil>
	I0131 03:19:53.687288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	W0131 03:19:53.687444 1465496 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:53.689464 1465496 out.go:177] * Restarting existing kvm2 VM for "no-preload-625812" ...
	I0131 03:19:49.610240 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.610357 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.621551 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.110145 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.110248 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.121902 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.609752 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.609896 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.620729 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.620760 1465898 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:50.620769 1465898 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:50.620781 1465898 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:50.620842 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:50.655962 1465898 cri.go:89] found id: ""
	I0131 03:19:50.656034 1465898 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:50.670196 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:50.678438 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:50.678512 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686353 1465898 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686377 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:50.787983 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.766656 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.947670 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.020841 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.087869 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:52.087974 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:52.588285 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.088598 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.588683 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.088222 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
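Both profiles above wait for the apiserver by re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500 ms; the "Process exited with status 1" entries are simply pgrep finding no match yet. A minimal sketch of that poll loop follows; the pattern and interval come from the log, while the two-minute timeout is an assumption, and this is not minikube's api_server.go.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls `sudo pgrep -xnf <pattern>` until it returns a PID
// or the deadline passes. pgrep exits non-zero when nothing matches, which is
// what the "unable to get apiserver pid" lines above correspond to.
func waitForAPIServerPID(pattern string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(interval)
	}
	return "", fmt.Errorf("apiserver process did not appear within %s", timeout)
}

func main() {
	// Pattern and ~500ms interval mirror the log; the timeout is an assumption.
	pid, err := waitForAPIServerPID("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("kube-apiserver PID:", pid)
}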
	I0131 03:19:53.416070 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:53.416102 1466459 machine.go:91] provisioned docker machine in 856.686657ms
	I0131 03:19:53.416115 1466459 start.go:300] post-start starting for "embed-certs-958254" (driver="kvm2")
	I0131 03:19:53.416130 1466459 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:53.416152 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.416515 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:53.416550 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.419110 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.419525 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419836 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.420057 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.420223 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.420376 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.503785 1466459 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:53.507858 1466459 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:53.507890 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:53.508021 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:53.508094 1466459 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:53.508184 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:53.515845 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:53.537459 1466459 start.go:303] post-start completed in 121.324433ms
	I0131 03:19:53.537495 1466459 fix.go:56] fixHost completed within 19.950074846s
	I0131 03:19:53.537526 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.540719 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541097 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.541138 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541371 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.541590 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541707 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541924 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.542116 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.542438 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.542452 1466459 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:53.663950 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671193.614107467
	
	I0131 03:19:53.663981 1466459 fix.go:206] guest clock: 1706671193.614107467
	I0131 03:19:53.663991 1466459 fix.go:219] Guest: 2024-01-31 03:19:53.614107467 +0000 UTC Remote: 2024-01-31 03:19:53.537501013 +0000 UTC m=+170.232508862 (delta=76.606454ms)
	I0131 03:19:53.664051 1466459 fix.go:190] guest clock delta is within tolerance: 76.606454ms
	I0131 03:19:53.664061 1466459 start.go:83] releasing machines lock for "embed-certs-958254", held for 20.076673524s
	I0131 03:19:53.664095 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.664469 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:53.667439 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668024 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.668104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668219 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.668884 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669087 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669227 1466459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:53.669314 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.669346 1466459 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:53.669377 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.673093 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673248 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673420 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673194 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673517 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673557 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673580 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673667 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673734 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.673969 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.673982 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.674173 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.674180 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.674312 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.799336 1466459 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:53.805162 1466459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:53.952587 1466459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:53.958419 1466459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:53.958530 1466459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:53.971832 1466459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:53.971866 1466459 start.go:475] detecting cgroup driver to use...
	I0131 03:19:53.971946 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:53.988375 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:54.000875 1466459 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:54.000948 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:54.017770 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:54.034214 1466459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:54.154352 1466459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:54.314926 1466459 docker.go:233] disabling docker service ...
	I0131 03:19:54.315012 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:54.330557 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:54.344595 1466459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:54.468196 1466459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:54.630438 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:54.645472 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:54.665340 1466459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:54.665427 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.677758 1466459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:54.677843 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.690405 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.702616 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.712654 1466459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:54.723746 1466459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:54.735284 1466459 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:54.735358 1466459 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:54.751082 1466459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:54.762460 1466459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:54.916842 1466459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:55.105770 1466459 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:55.105862 1466459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:55.111870 1466459 start.go:543] Will wait 60s for crictl version
	I0131 03:19:55.112014 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:19:55.116743 1466459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:55.165427 1466459 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:55.165526 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.223389 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.272307 1466459 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
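The sequence from 03:19:54.645 to 03:19:55.272 writes /etc/crictl.yaml, points CRI-O's pause_image at registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs via sed, reloads systemd and restarts crio, then waits for the socket and runs `crictl version`. Below is a minimal local sketch of the two config substitutions plus the crictl.yaml write; minikube actually runs these as shell commands over SSH, so this is an illustration rather than its implementation.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Runtime endpoint for crictl, as written by the log step at 03:19:54.645472.
	crictlYAML := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0644); err != nil {
		log.Fatal(err)
	}

	// Equivalent of the two sed substitutions on 02-crio.conf:
	//   pause_image    -> registry.k8s.io/pause:3.9
	//   cgroup_manager -> cgroupfs
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(confPath)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(confPath, []byte(conf), 0644); err != nil {
		log.Fatal(err)
	}
	// Omitted for brevity: the conmon_cgroup = "pod" insertion, the /etc/cni/net.mk
	// cleanup, and the systemctl daemon-reload / restart crio steps that follow in the log.
}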
	I0131 03:19:53.690828 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Start
	I0131 03:19:53.691030 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring networks are active...
	I0131 03:19:53.691801 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network default is active
	I0131 03:19:53.692297 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network mk-no-preload-625812 is active
	I0131 03:19:53.693485 1465496 main.go:141] libmachine: (no-preload-625812) Getting domain xml...
	I0131 03:19:53.694618 1465496 main.go:141] libmachine: (no-preload-625812) Creating domain...
	I0131 03:19:55.042532 1465496 main.go:141] libmachine: (no-preload-625812) Waiting to get IP...
	I0131 03:19:55.043607 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.044041 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.044180 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.044045 1467687 retry.go:31] will retry after 230.922351ms: waiting for machine to come up
	I0131 03:19:55.276816 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.277402 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.277435 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.277367 1467687 retry.go:31] will retry after 370.068692ms: waiting for machine to come up
	I0131 03:19:55.274017 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:55.277592 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278017 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:55.278056 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278356 1466459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:55.283769 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
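The one-liner above swaps the host.minikube.internal entry in /etc/hosts: it drops any existing line for that name, appends "192.168.39.1<TAB>host.minikube.internal", stages the result under /tmp, and copies it back with sudo. A minimal sketch of the same rewrite follows; the direct in-place write is a simplification of the stage-and-copy step the log performs over SSH.

package main

import (
	"log"
	"os"
	"strings"
)

// setHostsEntry mimics the shell one-liner above: drop any existing line for
// the given name, append "<ip>\t<name>", and write the result back.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same filter as: grep -v $'\thost.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the log line above; host.minikube.internal points at the gateway IP.
	if err := setHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}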
	I0131 03:19:55.298107 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:55.298188 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:55.338433 1466459 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:55.338558 1466459 ssh_runner.go:195] Run: which lz4
	I0131 03:19:55.342771 1466459 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:55.347160 1466459 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:55.347206 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:56.991725 1466459 crio.go:444] Took 1.648994 seconds to copy over tarball
	I0131 03:19:56.991821 1466459 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
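Because `stat /preloaded.tar.lz4` fails, minikube copies the ~458 MB preloaded-images tarball onto the node and unpacks it into /var with lz4 and extended attributes preserved, which is why the later `crictl images` call finds all images preloaded. A minimal sketch of the existence check and extraction follows, assuming the tarball has already been copied into place (the SSH transfer itself is elided).

package main

import (
	"errors"
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Same existence check the log performs with `stat` before copying the tarball over SSH.
	if _, err := os.Stat(tarball); errors.Is(err, os.ErrNotExist) {
		// In the real flow minikube scp's the cached
		// preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 here first;
		// this sketch assumes the file is already in place.
		log.Fatalf("%s not found; copy the preload tarball first", tarball)
	}

	// Extract into /var with the same flags the log shows, then remove the tarball.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	if err := os.Remove(tarball); err != nil {
		log.Fatal(err)
	}
}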
	I0131 03:19:53.823139 1465727 retry.go:31] will retry after 3.780431021s: kubelet not initialised
	I0131 03:19:57.615679 1465727 retry.go:31] will retry after 12.134340719s: kubelet not initialised
	I0131 03:19:54.588794 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.623052 1465898 api_server.go:72] duration metric: took 2.535180605s to wait for apiserver process to appear ...
	I0131 03:19:54.623092 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:54.623141 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:55.649133 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.649797 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.649838 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.649768 1467687 retry.go:31] will retry after 421.622241ms: waiting for machine to come up
	I0131 03:19:56.073712 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.074467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.074513 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.074269 1467687 retry.go:31] will retry after 587.05453ms: waiting for machine to come up
	I0131 03:19:56.663210 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.663749 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.663790 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.663678 1467687 retry.go:31] will retry after 620.56275ms: waiting for machine to come up
	I0131 03:19:57.286207 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.286688 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.286737 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.286647 1467687 retry.go:31] will retry after 674.764903ms: waiting for machine to come up
	I0131 03:19:57.963069 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.963573 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.963599 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.963520 1467687 retry.go:31] will retry after 1.10400582s: waiting for machine to come up
	I0131 03:19:59.068964 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:59.069440 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:59.069467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:59.069383 1467687 retry.go:31] will retry after 1.48867494s: waiting for machine to come up
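The "waiting for machine to come up" lines show a retry loop whose sleep grows from roughly 230 ms toward 1.5 s with some jitter while libvirt has not yet handed the restarted VM an IP. A minimal sketch of that retry-with-backoff pattern follows; retryWithBackoff and lookupIP are illustrative stand-ins, not minikube's retry.go or a real libmachine call.

package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxTries is reached,
// sleeping a jittered, growing interval between attempts - the same shape as
// the "will retry after 230ms / 370ms / ... / 1.48s" lines above.
func retryWithBackoff(maxTries int, base time.Duration, fn func() error) error {
	wait := base
	for i := 0; i < maxTries; i++ {
		if err := fn(); err == nil {
			return nil
		} else {
			jittered := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
			fmt.Printf("will retry after %s: %v\n", jittered, err)
			time.Sleep(jittered)
		}
		wait = wait * 3 / 2 // grow ~1.5x each round
	}
	return errors.New("gave up waiting for machine to come up")
}

func main() {
	// lookupIP is a hypothetical stand-in for asking libvirt for the domain's
	// DHCP lease; it succeeds after a few attempts purely for demonstration.
	attempts := 0
	lookupIP := func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	}
	if err := retryWithBackoff(10, 250*time.Millisecond, lookupIP); err != nil {
		log.Fatal(err)
	}
	fmt.Println("machine is up")
}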
	I0131 03:20:00.084963 1466459 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093104085s)
	I0131 03:20:00.085000 1466459 crio.go:451] Took 3.093238 seconds to extract the tarball
	I0131 03:20:00.085014 1466459 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:20:00.153533 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:00.203133 1466459 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:20:00.203215 1466459 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:20:00.203308 1466459 ssh_runner.go:195] Run: crio config
	I0131 03:20:00.266864 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:00.266898 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:00.266927 1466459 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:00.266955 1466459 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-958254 NodeName:embed-certs-958254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

	I0131 03:20:00.267148 1466459 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-958254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:00.267253 1466459 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-958254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:00.267331 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:20:00.279543 1466459 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:00.279637 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:00.292463 1466459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0131 03:20:00.313102 1466459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:20:00.329962 1466459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
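The kubelet drop-in above is rendered from the profile (Kubernetes version, node name, node IP, CRI socket) and pushed to the node as 10-kubeadm.conf alongside kubelet.service and kubeadm.yaml.new. Below is a minimal text/template sketch that reproduces the ExecStart line from those four values; the struct and template here are illustrative, not minikube's own types.

package main

import (
	"log"
	"os"
	"text/template"
)

// kubeletUnit mirrors the drop-in shown above at kubeadm.go:976.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, CRISocket, NodeName, NodeIP string
	}{
		KubernetesVersion: "v1.28.4",
		CRISocket:         "unix:///var/run/crio/crio.sock",
		NodeName:          "embed-certs-958254",
		NodeIP:            "192.168.39.232",
	}
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// minikube scp's the rendered text to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf;
	// here it is simply printed.
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}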
	I0131 03:20:00.351487 1466459 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:00.355881 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:00.368624 1466459 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254 for IP: 192.168.39.232
	I0131 03:20:00.368668 1466459 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:00.368836 1466459 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:00.368890 1466459 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:00.368997 1466459 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/client.key
	I0131 03:20:00.369071 1466459 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key.ca7bc7e0
	I0131 03:20:00.369108 1466459 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key
	I0131 03:20:00.369230 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:00.369257 1466459 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:00.369268 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:00.369294 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:00.369317 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:00.369341 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:00.369379 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:00.370093 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:00.392771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 03:20:00.416504 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:00.441357 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 03:20:00.469603 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:00.493533 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:00.521871 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:00.547738 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:00.572771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:00.596263 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:00.618766 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:00.642074 1466459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:00.657634 1466459 ssh_runner.go:195] Run: openssl version
	I0131 03:20:00.662869 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:00.673704 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678201 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678299 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.683872 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:00.694619 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:00.705736 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710374 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710451 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.715928 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:00.727620 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:00.738237 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742428 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742525 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.747812 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:00.757953 1466459 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:00.762418 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:00.768325 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:00.773824 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:00.779967 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:00.785943 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:00.791907 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
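Each `openssl x509 -noout -checkend 86400` call above asks whether the given control-plane certificate expires within the next 24 hours; a non-zero exit would force regeneration before the cluster restart. The same check in a minimal Go sketch using crypto/x509, with the paths copied from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question `openssl x509 -noout -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// The same set of control-plane client/server certs the log checks above.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%-55s expires within 24h: %v\n", p, soon)
	}
}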
	I0131 03:20:00.797790 1466459 kubeadm.go:404] StartCluster: {Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:00.797882 1466459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:00.797989 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:00.843199 1466459 cri.go:89] found id: ""
	I0131 03:20:00.843289 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:00.853963 1466459 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:00.853994 1466459 kubeadm.go:636] restartCluster start
	I0131 03:20:00.854060 1466459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:00.864776 1466459 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:00.866019 1466459 kubeconfig.go:92] found "embed-certs-958254" server: "https://192.168.39.232:8443"
	I0131 03:20:00.868584 1466459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:00.878689 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:00.878765 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:00.891577 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.378755 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.378849 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.392040 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.879661 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.879770 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.894998 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.379551 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.379671 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.393008 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.879560 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.879680 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.896699 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:59.557240 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.557285 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.557308 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.612724 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.612775 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.624061 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.721181 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:19:59.721236 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.123708 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.134187 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.134229 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.624066 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.630341 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.630374 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.123728 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.131385 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.131479 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.623667 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.629384 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.629431 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.123701 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.129220 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.129272 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.623693 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.629228 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.629271 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.123778 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.132555 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:03.132617 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.623244 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.630694 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:20:03.649732 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:03.649778 1465898 api_server.go:131] duration metric: took 9.02667615s to wait for apiserver health ...
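	The 500 responses above are kube-apiserver's /healthz endpoint aggregating its post-start hooks; minikube simply re-polls the endpoint until every hook reports ok and the status flips to 200, as it finally does at 03:20:03 above. A minimal sketch of such a poll loop in Go (illustrative only, not minikube's actual api_server.go; the URL, timeout, and TLS handling are assumptions):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // pollHealthz re-checks the apiserver /healthz endpoint until it reports
	    // 200 OK or the deadline expires, mirroring the retry pattern in the log.
	    func pollHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            // A production client would trust the cluster CA and present client
	            // certificates; skipping verification keeps the sketch short.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // healthz check passed
	                }
	            }
	            time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	        }
	        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	    }

	    func main() {
	        if err := pollHealthz("https://192.168.61.123:8444/healthz", 5*time.Minute); err != nil {
	            panic(err)
	        }
	    }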
	I0131 03:20:03.649792 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:20:03.649802 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:03.651944 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:03.653645 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:03.683325 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
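	The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is a standard CNI bridge configuration. A sketch of writing such a conflist in Go (the JSON uses common bridge plus host-local defaults purely for illustration; it is not the exact file minikube generates, and the subnet value is an assumption):

	    package main

	    import "os"

	    // A typical bridge CNI conflist; the values are common defaults that only
	    // illustrate the shape of the file, not minikube's exact contents.
	    const bridgeConflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	        },
	        {"type": "portmap", "capabilities": {"portMappings": true}}
	      ]
	    }`

	    func main() {
	        // Equivalent of the scp step in the log: place the conflist where the
	        // container runtime's CNI plugins will pick it up.
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
	            panic(err)
	        }
	    }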
	I0131 03:20:03.719778 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:03.745975 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:03.746029 1465898 system_pods.go:61] "coredns-5dd5756b68-xlq7n" [0b9d620d-d79f-474e-aeb7-1357daaaa849] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:03.746044 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [2f2f474f-bee9-4df2-a5f6-2525bc15c99a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:03.746056 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [ba87e90b-b01b-4aa7-a4da-68d8e5c39020] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:03.746088 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [a96ebed4-d6f6-47b7-a8f6-b80acc9cde60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:03.746111 1465898 system_pods.go:61] "kube-proxy-trv94" [c085dfdb-0b75-40c1-b331-ef687888090e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:03.746121 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [b7adce77-8007-4316-9a2a-bdcec260840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:03.746141 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-fct8b" [b1d9d7e3-98c4-4b7a-acd1-d88fe109ef33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:03.746155 1465898 system_pods.go:61] "storage-provisioner" [be762288-ff88-44e7-938d-9ecc8a977526] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:03.746169 1465898 system_pods.go:74] duration metric: took 26.36215ms to wait for pod list to return data ...
	I0131 03:20:03.746183 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:03.755320 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:03.755365 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:03.755384 1465898 node_conditions.go:105] duration metric: took 9.194114ms to run NodePressure ...
	I0131 03:20:03.755413 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:04.124222 1465898 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130888 1465898 kubeadm.go:787] kubelet initialised
	I0131 03:20:04.130921 1465898 kubeadm.go:788] duration metric: took 6.663771ms waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130932 1465898 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:04.141883 1465898 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:00.559917 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:00.715628 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:00.715677 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:00.560506 1467687 retry.go:31] will retry after 1.67725835s: waiting for machine to come up
	I0131 03:20:02.240289 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:02.240826 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:02.240863 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:02.240781 1467687 retry.go:31] will retry after 2.023057937s: waiting for machine to come up
	I0131 03:20:04.266202 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:04.266733 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:04.266825 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:04.266715 1467687 retry.go:31] will retry after 2.664323304s: waiting for machine to come up
	I0131 03:20:03.379260 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.379366 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.395063 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:03.879206 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.879327 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.896172 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.378721 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.378829 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.395328 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.878823 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.878944 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.891061 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.379692 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.379795 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.395247 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.879667 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.879811 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.894445 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.378974 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.379107 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.391878 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.879343 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.879446 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.892910 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.379549 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.379647 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.391991 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.879610 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.879757 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.895280 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.154196 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:08.664906 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:06.932836 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:06.933529 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:06.933574 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:06.933459 1467687 retry.go:31] will retry after 3.065677387s: waiting for machine to come up
	I0131 03:20:10.001330 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:10.002186 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:10.002216 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:10.002101 1467687 retry.go:31] will retry after 3.036905728s: waiting for machine to come up
	I0131 03:20:08.379200 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.379310 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.392983 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:08.878955 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.879070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.890999 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.379530 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.379633 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.391351 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.878733 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.878814 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.891556 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.379098 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.379206 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.391233 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.879672 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.879786 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.892324 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.892364 1466459 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:10.892377 1466459 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:10.892393 1466459 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:10.892471 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:10.932354 1466459 cri.go:89] found id: ""
	I0131 03:20:10.932425 1466459 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:10.948273 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:10.957212 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:10.957285 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966329 1466459 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966369 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.093326 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.750399 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.960956 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.060752 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.148963 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:12.149070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:12.649736 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:13.150030 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:09.755152 1465727 retry.go:31] will retry after 13.770889272s: kubelet not initialised
	I0131 03:20:09.648674 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:09.648703 1465898 pod_ready.go:81] duration metric: took 5.506781604s waiting for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:09.648716 1465898 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656233 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:11.656258 1465898 pod_ready.go:81] duration metric: took 2.007535905s waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656267 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663570 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.663600 1465898 pod_ready.go:81] duration metric: took 1.007324961s waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668808 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.668832 1465898 pod_ready.go:81] duration metric: took 5.21407ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668843 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673583 1465898 pod_ready.go:92] pod "kube-proxy-trv94" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.673603 1465898 pod_ready.go:81] duration metric: took 4.754586ms waiting for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679052 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.679074 1465898 pod_ready.go:81] duration metric: took 5.453847ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679082 1465898 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
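	The pod_ready waits above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, metrics-server) boil down to checking each pod's Ready condition in kube-system. A minimal helper using the client-go API types (illustrative; isPodReady is a made-up name, not a function from minikube's pod_ready.go):

	    package main

	    import (
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	    )

	    // isPodReady reports whether a pod's Ready condition is True, which is the
	    // check the "to be Ready" waits above repeat until success or timeout.
	    func isPodReady(pod *corev1.Pod) bool {
	        for _, cond := range pod.Status.Conditions {
	            if cond.Type == corev1.PodReady {
	                return cond.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        // In practice the Pod comes from a client-go Get/List call; an empty
	        // Pod here just keeps the sketch self-contained.
	        fmt.Println(isPodReady(&corev1.Pod{}))
	    }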
	I0131 03:20:13.040911 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.041419 1465496 main.go:141] libmachine: (no-preload-625812) Found IP for machine: 192.168.72.23
	I0131 03:20:13.041451 1465496 main.go:141] libmachine: (no-preload-625812) Reserving static IP address...
	I0131 03:20:13.041471 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has current primary IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.042029 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.042083 1465496 main.go:141] libmachine: (no-preload-625812) Reserved static IP address: 192.168.72.23
	I0131 03:20:13.042105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | skip adding static IP to network mk-no-preload-625812 - found existing host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"}
	I0131 03:20:13.042124 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Getting to WaitForSSH function...
	I0131 03:20:13.042143 1465496 main.go:141] libmachine: (no-preload-625812) Waiting for SSH to be available...
	I0131 03:20:13.044263 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044670 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.044707 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044866 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH client type: external
	I0131 03:20:13.044890 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa (-rw-------)
	I0131 03:20:13.044958 1465496 main.go:141] libmachine: (no-preload-625812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:20:13.044979 1465496 main.go:141] libmachine: (no-preload-625812) DBG | About to run SSH command:
	I0131 03:20:13.044993 1465496 main.go:141] libmachine: (no-preload-625812) DBG | exit 0
	I0131 03:20:13.142763 1465496 main.go:141] libmachine: (no-preload-625812) DBG | SSH cmd err, output: <nil>: 
	I0131 03:20:13.143065 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetConfigRaw
	I0131 03:20:13.143763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.146827 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147322 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.147356 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147639 1465496 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/config.json ...
	I0131 03:20:13.147843 1465496 machine.go:88] provisioning docker machine ...
	I0131 03:20:13.147866 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:13.148104 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148307 1465496 buildroot.go:166] provisioning hostname "no-preload-625812"
	I0131 03:20:13.148332 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148510 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.151259 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151623 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.151658 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151808 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.152034 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152222 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152415 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.152601 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.152979 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.152996 1465496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-625812 && echo "no-preload-625812" | sudo tee /etc/hostname
	I0131 03:20:13.302957 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-625812
	
	I0131 03:20:13.302989 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.306162 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306612 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.306656 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306932 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.307236 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307458 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307644 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.307891 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.308385 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.308415 1465496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-625812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-625812/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-625812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:20:13.459393 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:20:13.459432 1465496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:20:13.459458 1465496 buildroot.go:174] setting up certificates
	I0131 03:20:13.459476 1465496 provision.go:83] configureAuth start
	I0131 03:20:13.459490 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.459818 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.462867 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463301 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.463333 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463516 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.466156 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466597 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.466629 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466788 1465496 provision.go:138] copyHostCerts
	I0131 03:20:13.466856 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:20:13.466870 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:20:13.466945 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:20:13.467051 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:20:13.467061 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:20:13.467099 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:20:13.467182 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:20:13.467195 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:20:13.467226 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:20:13.467295 1465496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.no-preload-625812 san=[192.168.72.23 192.168.72.23 localhost 127.0.0.1 minikube no-preload-625812]
	I0131 03:20:13.629331 1465496 provision.go:172] copyRemoteCerts
	I0131 03:20:13.629392 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:20:13.629420 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.632451 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.632871 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.632903 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.633155 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.633334 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.633502 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.633643 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:13.729991 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:20:13.755853 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:20:13.781125 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:20:13.803778 1465496 provision.go:86] duration metric: configureAuth took 344.286867ms
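
The configureAuth step logged above amounts to minting a host-specific server certificate signed by the local minikube CA, with the VM IP, localhost and the machine name in the SAN list, then copying it onto the guest. Below is a minimal stand-alone sketch of that idea in Go; the file names, the PKCS#1 key format and the validity window are assumptions for illustration, not minikube's actual provision.go code.

    // Hypothetical sketch: issue a server certificate signed by an existing CA,
    // with SANs mirroring the "san=[...]" list in the log above.
    // Error handling is elided for brevity.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA certificate and key (paths are placeholders, key assumed PKCS#1).
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

        // Fresh key pair for the server certificate.
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-625812"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(10, 0, 0), // assumed validity
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.72.23"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "no-preload-625812"},
        }

        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        _ = os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("server-key.pem",
            pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }

The copyRemoteCerts lines that follow then scp the CA and the freshly minted server.pem/server-key.pem into /etc/docker on the guest, which is exactly what the 1078/1229/1679-byte transfers above record.
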
	I0131 03:20:13.803820 1465496 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:20:13.804030 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:20:13.804138 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.807234 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807675 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.807736 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807899 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.808108 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808307 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808461 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.808663 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.809033 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.809055 1465496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:20:14.179008 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:20:14.179039 1465496 machine.go:91] provisioned docker machine in 1.031179568s
	I0131 03:20:14.179055 1465496 start.go:300] post-start starting for "no-preload-625812" (driver="kvm2")
	I0131 03:20:14.179072 1465496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:20:14.179134 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.179500 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:20:14.179542 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.183050 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183483 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.183515 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183726 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.183919 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.184103 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.184299 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.282828 1465496 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:20:14.288098 1465496 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:20:14.288135 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:20:14.288242 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:20:14.288351 1465496 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:20:14.288482 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:20:14.297359 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:14.323339 1465496 start.go:303] post-start completed in 144.265535ms
	I0131 03:20:14.323379 1465496 fix.go:56] fixHost completed within 20.659162262s
	I0131 03:20:14.323408 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.326649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.327063 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327386 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.327693 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.327882 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.328068 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.328260 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:14.328638 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:14.328668 1465496 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:20:14.464275 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671214.411008277
	
	I0131 03:20:14.464299 1465496 fix.go:206] guest clock: 1706671214.411008277
	I0131 03:20:14.464307 1465496 fix.go:219] Guest: 2024-01-31 03:20:14.411008277 +0000 UTC Remote: 2024-01-31 03:20:14.32338512 +0000 UTC m=+358.954052365 (delta=87.623157ms)
	I0131 03:20:14.464327 1465496 fix.go:190] guest clock delta is within tolerance: 87.623157ms
	I0131 03:20:14.464332 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 20.800154018s
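
The "guest clock" lines just above compare the VM's `date +%s.%N` output with the host's wall clock and only act when the delta is outside a tolerance (87.6ms passes here). A rough stand-alone illustration of that comparison follows; the clockDelta helper and the 2-second threshold are assumptions, not minikube's exact fix.go logic.

    // Hypothetical sketch: decide whether a guest clock needs resyncing, given
    // the "seconds.nanoseconds" string returned by `date +%s.%N` over SSH.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        const tolerance = 2 * time.Second // assumed threshold for illustration
        // Values taken from the log above: guest 1706671214.411008277, host .32338512.
        delta, _ := clockDelta("1706671214.411008277", time.Unix(1706671214, 323385120))
        if delta < -tolerance || delta > tolerance {
            fmt.Printf("guest clock delta %v outside tolerance, would resync\n", delta)
        } else {
            fmt.Printf("guest clock delta %v within tolerance\n", delta)
        }
    }
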
	I0131 03:20:14.464349 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.464664 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:14.467627 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.467912 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.467952 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.468086 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468622 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468827 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468918 1465496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:20:14.468974 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.469103 1465496 ssh_runner.go:195] Run: cat /version.json
	I0131 03:20:14.469143 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.471884 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472243 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472408 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472472 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472507 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472426 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472696 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472810 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472825 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473046 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473048 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473275 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.473288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473547 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.563583 1465496 ssh_runner.go:195] Run: systemctl --version
	I0131 03:20:14.602977 1465496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:20:14.752069 1465496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:20:14.759056 1465496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:20:14.759149 1465496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:20:14.778064 1465496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:20:14.778102 1465496 start.go:475] detecting cgroup driver to use...
	I0131 03:20:14.778197 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:20:14.791672 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:20:14.803938 1465496 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:20:14.804018 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:20:14.816689 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:20:14.829415 1465496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:20:14.956428 1465496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:20:15.082172 1465496 docker.go:233] disabling docker service ...
	I0131 03:20:15.082260 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:20:15.094675 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:20:15.106262 1465496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:20:15.229460 1465496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:20:15.341585 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:20:15.354587 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:20:15.374141 1465496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:20:15.374228 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.386153 1465496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:20:15.386224 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.398130 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.407759 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.417278 1465496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:20:15.427128 1465496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:20:15.437249 1465496 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:20:15.437318 1465496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:20:15.451522 1465496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:20:15.460741 1465496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:20:15.564813 1465496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:20:15.729334 1465496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:20:15.729436 1465496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:20:15.734544 1465496 start.go:543] Will wait 60s for crictl version
	I0131 03:20:15.734634 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:15.738536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:20:15.789942 1465496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:20:15.790066 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.844864 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.895286 1465496 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
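
For context on the tee/sed commands in the runtime-setup block above: their net effect is a crictl.yaml pointing crictl at the CRI-O socket plus a small CRI-O drop-in selecting the pause image and the cgroupfs cgroup manager, followed by a crio restart. The sketch below just renders equivalent file contents; the exact section layout of minikube's 02-crio.conf drop-in is an assumption (the real file carries more keys and is edited in place rather than regenerated).

    // Hypothetical sketch: render the two small config files that the tee/sed
    // commands above effectively produce on the guest.
    package main

    import "fmt"

    func main() {
        crictlYAML := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

        crioDropIn := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    `
        // In the real flow these land in /etc/crictl.yaml and
        // /etc/crio/crio.conf.d/02-crio.conf, followed by `systemctl restart crio`.
        fmt.Print("# /etc/crictl.yaml\n", crictlYAML, "\n")
        fmt.Print("# /etc/crio/crio.conf.d/02-crio.conf\n", crioDropIn)
    }
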
	I0131 03:20:13.649824 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.150192 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.649250 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.677858 1466459 api_server.go:72] duration metric: took 2.528895825s to wait for apiserver process to appear ...
	I0131 03:20:14.677890 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:14.677920 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:14.688429 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:17.190684 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:15.896701 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:15.899655 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900079 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:15.900105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900392 1465496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0131 03:20:15.904607 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:15.916202 1465496 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 03:20:15.916255 1465496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:15.964126 1465496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0131 03:20:15.964157 1465496 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:20:15.964213 1465496 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.964249 1465496 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.964291 1465496 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.964278 1465496 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.964411 1465496 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0131 03:20:15.964472 1465496 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.964696 1465496 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.964771 1465496 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:15.965842 1465496 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.966659 1465496 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0131 03:20:15.966705 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.966737 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.967221 1465496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.967386 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.157890 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.160428 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0131 03:20:16.170727 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.185791 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.209517 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.212835 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.215809 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.221405 1465496 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0131 03:20:16.221457 1465496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.221504 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369265 1465496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0131 03:20:16.369302 1465496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0131 03:20:16.369324 1465496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.369340 1465496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.369344 1465496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0131 03:20:16.369367 1465496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.369382 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369392 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369404 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369474 1465496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0131 03:20:16.369494 1465496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.369506 1465496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0131 03:20:16.369521 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369529 1465496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.369562 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369617 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.384313 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.384333 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.470950 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0131 03:20:16.471044 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.471091 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.496271 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0131 03:20:16.496296 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496398 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496485 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:16.496488 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496338 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.496494 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496730 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.531464 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531550 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0131 03:20:16.531570 1465496 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531594 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531640 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531595 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0131 03:20:16.531669 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531638 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531738 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0131 03:20:16.536091 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0131 03:20:16.805880 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339660 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.807978952s)
	I0131 03:20:20.339703 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0131 03:20:20.339719 1465496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.533795146s)
	I0131 03:20:20.339744 1465496 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339785 1465496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0131 03:20:20.339823 1465496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339829 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339863 1465496 ssh_runner.go:195] Run: which crictl
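
Because no preload tarball exists for v1.29.0-rc.2, the block above checks each required image in the runtime with `podman image inspect`, removes stale tags via crictl, and then streams the cached tarball in with `podman load -i` (the etcd load alone takes ~3.8s). A rough stand-alone version of that check-then-load step is sketched below; the image/tarball pair is copied from the log, while the ensureImage helper itself is an assumption rather than minikube's cache_images.go.

    // Hypothetical sketch: load a cached image tarball into the guest's container
    // storage via podman only when the runtime does not already have the image.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureImage returns nil if the image is already present or was loaded from tarPath.
    func ensureImage(image, tarPath string) error {
        // Already present? (podman shares storage with CRI-O on the minikube guest.)
        if err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Run(); err == nil {
            return nil
        }
        // Not present: stream the cached tarball in.
        out, err := exec.Command("sudo", "podman", "load", "-i", tarPath).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", tarPath, err, out)
        }
        return nil
    }

    func main() {
        if err := ensureImage("registry.k8s.io/etcd:3.5.10-0",
            "/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
            fmt.Println(err)
        }
    }
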
	I0131 03:20:19.144422 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.144461 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.144481 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.199050 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.199092 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.199110 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.248370 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.248405 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:19.678887 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.699942 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.699975 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.178212 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.196360 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:20.196408 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.679003 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.685599 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:20:20.693909 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:20.693939 1466459 api_server.go:131] duration metric: took 6.016042033s to wait for apiserver health ...
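
The healthz wait that just completed is, in essence: hit https://<apiserver>:8443/healthz anonymously (so the early 403 "system:anonymous" responses and 500 "reason withheld" post-start hooks above are expected), treat anything other than a 200 "ok" as not ready, and retry on an interval until a deadline. A minimal sketch of that loop follows; the endpoint is taken from the log, while the interval, timeout and waitForHealthz helper are assumptions, and TLS verification is skipped only because the probe is anonymous.

    // Hypothetical sketch: poll an apiserver /healthz endpoint until it returns
    // HTTP 200, tolerating the 403/500 responses seen earlier in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Anonymous probe against the apiserver's self-signed serving cert.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.232:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
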
	I0131 03:20:20.693972 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:20.693978 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:20.695935 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:20.697296 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:20.708301 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:20.730496 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:20.741756 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:20.741799 1466459 system_pods.go:61] "coredns-5dd5756b68-ntmxp" [bb90dd61-c60a-4beb-b77c-66c4b5ce56a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:20.741810 1466459 system_pods.go:61] "etcd-embed-certs-958254" [69a5883a-307d-47d1-86ef-6f76bf77bdff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:20.741830 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [1cad3813-0df9-4729-862f-d1ab237d297c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:20.741841 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [34bfed89-5c8c-4294-843b-d32261c8fb5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:20.741851 1466459 system_pods.go:61] "kube-proxy-q6dmr" [092e0786-80f7-480c-8ede-95e11c1f17a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:20.741862 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [28c8d75e-9517-4ccc-85ef-5b535973c829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:20.741876 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-d8x5f" [fc69fea8-ab7b-4f3d-980f-7ad995027e77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:20.741889 1466459 system_pods.go:61] "storage-provisioner" [5026a00d-8df8-408a-a164-cf22697260e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:20.741898 1466459 system_pods.go:74] duration metric: took 11.375298ms to wait for pod list to return data ...
	I0131 03:20:20.741912 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:20.748073 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:20.748110 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:20.748125 1466459 node_conditions.go:105] duration metric: took 6.206594ms to run NodePressure ...
	I0131 03:20:20.748147 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:21.022867 1466459 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028572 1466459 kubeadm.go:787] kubelet initialised
	I0131 03:20:21.028600 1466459 kubeadm.go:788] duration metric: took 5.696903ms waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028612 1466459 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:21.034373 1466459 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.040977 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041008 1466459 pod_ready.go:81] duration metric: took 6.605955ms waiting for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.041021 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041029 1466459 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.047304 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047360 1466459 pod_ready.go:81] duration metric: took 6.317423ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.047379 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047397 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.054356 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054380 1466459 pod_ready.go:81] duration metric: took 6.969808ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.054393 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054405 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.066327 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:19.688890 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.187659 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.403415 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.063558989s)
	I0131 03:20:22.403448 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0131 03:20:22.403467 1465496 ssh_runner.go:235] Completed: which crictl: (2.063583602s)
	I0131 03:20:22.403536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:22.403473 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.403667 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.453126 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0131 03:20:22.453255 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:25.325221 1465496 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.871938157s)
	I0131 03:20:25.325266 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0131 03:20:25.325371 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.92167713s)
	I0131 03:20:25.325397 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0131 03:20:25.325430 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.325498 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.562106 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.562702 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.562730 1466459 pod_ready.go:81] duration metric: took 5.508313651s waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.562740 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570741 1466459 pod_ready.go:92] pod "kube-proxy-q6dmr" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.570776 1466459 pod_ready.go:81] duration metric: took 8.02796ms waiting for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570788 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.532998 1465727 kubeadm.go:787] kubelet initialised
	I0131 03:20:23.533031 1465727 kubeadm.go:788] duration metric: took 39.585413252s waiting for restarted kubelet to initialise ...
	I0131 03:20:23.533041 1465727 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:23.538956 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545637 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.545665 1465727 pod_ready.go:81] duration metric: took 6.67341ms waiting for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545679 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552018 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.552047 1465727 pod_ready.go:81] duration metric: took 6.359089ms waiting for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552061 1465727 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557416 1465727 pod_ready.go:92] pod "etcd-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.557446 1465727 pod_ready.go:81] duration metric: took 5.375834ms waiting for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557458 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563429 1465727 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.563458 1465727 pod_ready.go:81] duration metric: took 5.99092ms waiting for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563470 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931088 1465727 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.931123 1465727 pod_ready.go:81] duration metric: took 367.644608ms waiting for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931135 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330635 1465727 pod_ready.go:92] pod "kube-proxy-7dtkz" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.330663 1465727 pod_ready.go:81] duration metric: took 399.520658ms waiting for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330673 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731521 1465727 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.731554 1465727 pod_ready.go:81] duration metric: took 400.873461ms waiting for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731568 1465727 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.738444 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:24.686688 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.688623 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:29.186579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.180697 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.855170809s)
	I0131 03:20:28.180729 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0131 03:20:28.180767 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:28.180841 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:29.652395 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.471522862s)
	I0131 03:20:29.652425 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0131 03:20:29.652463 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:29.652540 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:28.578108 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.077401 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.080970 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.739586 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:30.739736 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.238815 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.187176 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.188862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.502715 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.85014178s)
	I0131 03:20:31.502759 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0131 03:20:31.502787 1465496 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:31.502844 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:32.554143 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.051250967s)
	I0131 03:20:32.554188 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0131 03:20:32.554229 1465496 cache_images.go:123] Successfully loaded all cached images
	I0131 03:20:32.554282 1465496 cache_images.go:92] LoadImages completed in 16.590108265s
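The ssh_runner.go:195 "Run:" / :235 "Completed:" pairs above time each `sudo podman load -i <tarball>` invocation on the guest and produce the per-image and overall LoadImages duration metrics. A minimal local sketch of that run-and-time pattern; it executes the command directly rather than over SSH, which is an assumption made only to keep the example self-contained:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// loadImage runs `sudo podman load -i <tarball>` and reports how long it
// took, roughly what each Run/Completed pair in the log records.
func loadImage(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	fmt.Printf("Completed: podman load -i %s: (%s)\n", tarball, time.Since(start))
	return nil
}

func main() {
	// Path mirrors the cached image tarballs named in the log.
	if err := loadImage("/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2"); err != nil {
		log.Fatal(err)
	}
}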
	I0131 03:20:32.554386 1465496 ssh_runner.go:195] Run: crio config
	I0131 03:20:32.619584 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:32.619612 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:32.619637 1465496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:32.619665 1465496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-625812 NodeName:no-preload-625812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:32.619840 1465496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-625812"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:32.619939 1465496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-625812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
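The kubeadm options struct above is rendered into the YAML shown earlier before being copied to /var/tmp/minikube/kubeadm.yaml.new on the guest. A rough sketch of that render step with Go's text/template; the struct fields and template fragment here are illustrative assumptions, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// initConfig holds just the values needed for the InitConfiguration
// fragment; the real options struct carries far more fields.
type initConfig struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	cfg := initConfig{AdvertiseAddress: "192.168.72.23", BindPort: 8443, NodeName: "no-preload-625812", NodeIP: "192.168.72.23"}
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	// Writes the rendered YAML fragment to stdout.
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}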
	I0131 03:20:32.620017 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0131 03:20:32.628855 1465496 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:32.628963 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:32.636481 1465496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0131 03:20:32.654320 1465496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0131 03:20:32.670366 1465496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0131 03:20:32.688615 1465496 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:32.692444 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:32.705599 1465496 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812 for IP: 192.168.72.23
	I0131 03:20:32.705644 1465496 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:32.705822 1465496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:32.705894 1465496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:32.705997 1465496 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/client.key
	I0131 03:20:32.706058 1465496 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key.a30a8404
	I0131 03:20:32.706092 1465496 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key
	I0131 03:20:32.706194 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:32.706221 1465496 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:32.706231 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:32.706258 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:32.706284 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:32.706310 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:32.706349 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:32.707138 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:32.729972 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:20:32.753498 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:32.775599 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:20:32.799455 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:32.822732 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:32.845839 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:32.868933 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:32.891565 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:32.914752 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:32.937305 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:32.960253 1465496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:32.976285 1465496 ssh_runner.go:195] Run: openssl version
	I0131 03:20:32.981630 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:32.990533 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994914 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994986 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:33.000249 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:33.009516 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:33.018643 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023046 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023106 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.028238 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:33.036925 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:33.045708 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050442 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050536 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.056067 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:33.065200 1465496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:33.069489 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:33.075140 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:33.080981 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:33.087018 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:33.092665 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:33.099605 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
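Each `openssl x509 -noout -in <crt> -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same check can be expressed with Go's crypto/x509; the helper name below is our own, not part of minikube:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate in the given PEM file expires
// within the next d, mirroring `openssl x509 -checkend`.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}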
	I0131 03:20:33.106207 1465496 kubeadm.go:404] StartCluster: {Name:no-preload-625812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:33.106310 1465496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:33.106376 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:33.150992 1465496 cri.go:89] found id: ""
	I0131 03:20:33.151088 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:33.161105 1465496 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:33.161131 1465496 kubeadm.go:636] restartCluster start
	I0131 03:20:33.161219 1465496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:33.170638 1465496 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.172109 1465496 kubeconfig.go:92] found "no-preload-625812" server: "https://192.168.72.23:8443"
	I0131 03:20:33.175582 1465496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:33.185433 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.185523 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.196952 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.685512 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.685612 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.696682 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.186433 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.197957 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.685533 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.685640 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.696731 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:35.186267 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.186369 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.197982 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.578014 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:33.578038 1466459 pod_ready.go:81] duration metric: took 7.007241801s waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:33.578047 1466459 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:35.585039 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.585299 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.737680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.740698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686379 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:38.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686193 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.686284 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.697343 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.185858 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.185960 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.197161 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.685546 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.685646 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.696796 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.186186 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.186280 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.197357 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.685916 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.686012 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.700288 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.185723 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.185820 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.197397 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.685651 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.685757 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.697204 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.185744 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.185844 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.198598 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.686185 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.686267 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.697736 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.186432 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.198099 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.085028 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.585359 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.238117 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.239129 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.687687 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:43.186737 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.686132 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.686236 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.699172 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.185642 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.185744 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.198284 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.685827 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.685935 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.698501 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.185953 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.186088 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.196802 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.686371 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.686445 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.698536 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.186445 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:43.186560 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:43.198640 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.198679 1465496 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:43.198690 1465496 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:43.198704 1465496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:43.198765 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:43.235648 1465496 cri.go:89] found id: ""
	I0131 03:20:43.235740 1465496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:43.252848 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:43.263501 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:43.263590 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274044 1465496 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274075 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:43.402961 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.454642 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051640672s)
	I0131 03:20:44.454673 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.660185 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.744795 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.816577 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:44.816690 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:45.316895 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:44.591170 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.085954 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:44.739730 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.240982 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.686082 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.687451 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.816800 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.317657 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.816892 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.317696 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.342389 1465496 api_server.go:72] duration metric: took 2.525810484s to wait for apiserver process to appear ...
	I0131 03:20:47.342423 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:47.342448 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.385155 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.385192 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.385206 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.431253 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.431293 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.842624 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.847644 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:51.847685 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.343335 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.348723 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:52.348780 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.842935 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.848263 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:20:52.863072 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:20:52.863104 1465496 api_server.go:131] duration metric: took 5.520672047s to wait for apiserver health ...
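The healthz loop above polls https://192.168.72.23:8443/healthz roughly every half second, treating the 403 and 500 responses as "not yet healthy" until a plain 200 "ok" comes back. A hedged sketch of such a poller; the insecure TLS config is an assumption made only to keep the example short, not how minikube authenticates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok"
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.23:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}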
	I0131 03:20:52.863113 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:52.863120 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:52.865141 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:49.585837 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.087030 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:49.738408 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:51.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:50.186754 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.197217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.866822 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:52.881451 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:52.918954 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:52.930533 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:52.930566 1465496 system_pods.go:61] "coredns-76f75df574-4qhpt" [9a5c2a49-f787-456a-9d15-cea2e111c6fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:52.930575 1465496 system_pods.go:61] "etcd-no-preload-625812" [2dbdb2c3-dd04-40de-80b4-caf18f1df2e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:52.930587 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [fd209808-5ebc-464e-b14b-88c6c830d7bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:52.930593 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [1f2cb9ec-cec9-4c45-8b78-0c9a9c0c9821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:52.930600 1465496 system_pods.go:61] "kube-proxy-8fdx9" [d1311d92-482b-4aa2-9dd3-053597717aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:52.930607 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [f7b0ba21-6c1d-4c67-aa69-6086b28ddf78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:52.930614 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-sjndx" [6bcdb3bb-4e28-4127-a273-091b44059d10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:52.930620 1465496 system_pods.go:61] "storage-provisioner" [66a4003b-e35e-4216-8d27-e8897a6ddc71] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:52.930627 1465496 system_pods.go:74] duration metric: took 11.645516ms to wait for pod list to return data ...
	I0131 03:20:52.930635 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:52.943250 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:52.943291 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:52.943306 1465496 node_conditions.go:105] duration metric: took 12.665118ms to run NodePressure ...
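The node_conditions lines read the node's cpu and ephemeral-storage capacity before moving on to the addon phase. A small client-go sketch that prints the same figures; the kubeconfig path is assumed for illustration:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Prints e.g. "no-preload-625812: cpu=2 ephemeral-storage=17784752Ki".
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}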
	I0131 03:20:52.943328 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:53.231968 1465496 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239131 1465496 kubeadm.go:787] kubelet initialised
	I0131 03:20:53.239162 1465496 kubeadm.go:788] duration metric: took 7.159608ms waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239171 1465496 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:53.248561 1465496 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:55.256463 1465496 pod_ready.go:102] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.585633 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.086475 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.239922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.738132 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.686904 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.687249 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.187579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.261900 1465496 pod_ready.go:92] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:57.261928 1465496 pod_ready.go:81] duration metric: took 4.013340748s waiting for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:57.261940 1465496 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:59.268779 1465496 pod_ready.go:102] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.586066 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:02.085212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:58.739138 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.739184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:03.243732 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:01.686704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.186767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.771061 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:00.771093 1465496 pod_ready.go:81] duration metric: took 3.509144879s waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:00.771107 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279749 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.279778 1465496 pod_ready.go:81] duration metric: took 1.508661327s waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279792 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286520 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.286550 1465496 pod_ready.go:81] duration metric: took 6.748377ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286564 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292455 1465496 pod_ready.go:92] pod "kube-proxy-8fdx9" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.292479 1465496 pod_ready.go:81] duration metric: took 5.904786ms waiting for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292491 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:04.300076 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.086312 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.086965 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:05.737969 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:07.738025 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.686645 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:09.186769 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.300932 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.799183 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:06.799208 1465496 pod_ready.go:81] duration metric: took 4.506710382s waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:06.799220 1465496 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:08.806102 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:08.585128 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.586208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.085360 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.238339 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:12.739920 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.186807 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.686030 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.306903 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.808471 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.085478 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.584968 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.238994 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.738301 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.686243 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.687966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:16.306169 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:18.306368 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.585283 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.085635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.738554 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:21.739391 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.186216 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.186318 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.186605 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.807270 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:23.307367 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.086508 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.585310 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.239650 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.739133 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.687020 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.186319 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:25.807083 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:27.807373 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.809229 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:28.586494 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.085758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.086070 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.237951 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.239234 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.186403 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.186539 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:32.305137 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:34.306664 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.586212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.085235 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.737751 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.239168 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.187669 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:37.686468 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.806650 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:39.305925 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.586428 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.084565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.739723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.237973 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.186321 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:42.187314 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:44.188149 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:41.307318 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.806323 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.085539 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.585341 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.239462 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.738184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:46.686042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.686866 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.806734 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.305446 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.305723 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.085346 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.085442 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:49.738268 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.239669 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.691518 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:53.186195 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.306654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.806020 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.085761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.586368 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.738548 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.739623 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:55.686288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:57.687383 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.807570 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.309552 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.084865 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.085071 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.085111 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.239410 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.239532 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:00.186408 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:02.186782 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.186839 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.806329 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.584749 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:07.586565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.739463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.740128 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.237766 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.187392 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.685886 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.805996 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.807179 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.086003 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.585799 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.238067 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.239177 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.686223 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.686341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:11.305779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:13.307616 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.085808 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.584477 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:14.738859 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.238767 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.187173 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.687034 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.806730 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:18.306392 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.584606 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.585553 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.738470 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.739486 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.185802 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:22.187625 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.806949 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.306121 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:25.306685 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.585692 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.085348 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.237900 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.238299 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.686574 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.687740 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.186290 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:27.805534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.806722 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.585853 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.087573 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.738699 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:30.740922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.241273 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.687338 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.186661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:32.306153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.306543 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.584981 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.585484 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.085009 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.739413 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.240386 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.687329 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:39.185388 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.308028 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.806629 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.085644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.585560 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.242599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.737723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.186288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.186859 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.306389 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.586579 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.085969 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.739244 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.237508 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:45.188774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.687222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:46.306909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:48.807077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.584667 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.584768 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.239422 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.687896 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:52.188700 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.306677 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.806006 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.585081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.585777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.085122 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.237822 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:56.238861 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.686276 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:57.186263 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.806184 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.306128 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.306364 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.588396 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.598213 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.737414 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.737727 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.739935 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:59.685823 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:01.686758 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:04.185852 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.807107 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.305740 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.085415 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.585036 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.239645 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.739347 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:06.686504 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:08.687322 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.305816 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.305938 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.586253 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.085522 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:10.239099 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.738591 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.186874 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.686181 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.306129 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.806507 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.585172 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.586137 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.738697 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.739523 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:15.686511 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:17.687193 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.306767 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.808302 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:19.085852 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.586641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.739573 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.238839 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:20.187546 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:22.687140 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.306401 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.307029 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.085548 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:26.586436 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.737681 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.737740 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.687572 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.186506 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.808456 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:28.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:30.307207 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.085660 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.087058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.739207 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.238687 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.686331 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.688381 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.187104 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.805987 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.806181 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:33.586190 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.085219 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.085516 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.238857 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.239092 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.687993 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.688870 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.808335 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.085919 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.585866 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.738192 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.738455 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.739283 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.185567 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.186680 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.307589 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.309027 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:44.586117 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.085597 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.238409 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.240204 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.685781 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.686167 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.807531 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.807973 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:50.308410 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.086271 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.086456 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.737691 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.739418 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.686475 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.687616 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:52.806510 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.806619 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:53.586673 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.085541 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.085777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.238680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.238735 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.239259 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.685972 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.686560 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.806707 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.806764 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.087035 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.088546 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.239507 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.240463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.686709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.687576 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.806909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:03.306534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.307522 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.585131 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.585178 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.738411 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.738605 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.186000 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.686048 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.806058 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.306442 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:08.585611 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.088448 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:09.238896 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.239934 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.186391 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.187940 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.680057 1465898 pod_ready.go:81] duration metric: took 4m0.000955013s waiting for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:12.680105 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:12.680132 1465898 pod_ready.go:38] duration metric: took 4m8.549185211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:12.680181 1465898 kubeadm.go:640] restartCluster took 4m32.094843295s
	W0131 03:24:12.680310 1465898 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:12.680376 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:12.307149 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:14.307483 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.586901 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.087404 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.738698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.239338 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.239499 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.806617 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:19.305298 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.585870 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.087112 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:20.737368 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:22.738599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.306715 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.807030 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.586072 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:25.586464 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.586525 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:24.731792 1465727 pod_ready.go:81] duration metric: took 4m0.00020412s waiting for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:24.731846 1465727 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:24.731869 1465727 pod_ready.go:38] duration metric: took 4m1.198813077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:24.731907 1465727 kubeadm.go:640] restartCluster took 5m3.213957096s
	W0131 03:24:24.731983 1465727 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:24.732022 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:26.064348 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.383924825s)
	I0131 03:24:26.064423 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:26.076943 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:26.087474 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:26.095980 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:26.096026 1465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:26.286603 1465898 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:25.808330 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.809779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.308001 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.087127 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:32.589212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:31.227776 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.495715112s)
	I0131 03:24:31.227855 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:31.241889 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:31.251082 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:31.259843 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:31.259887 1465727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0131 03:24:31.469869 1465727 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
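The "config check failed, skipping stale config cleanup" lines above come from minikube probing for the four kubeconfig files under /etc/kubernetes before deciding whether a previous cluster needs cleanup; since none exist, it goes straight to a fresh `kubeadm init`. A minimal, hypothetical Go sketch of that decision (illustrative only, not minikube's actual kubeadm.go code):

```go
// Hypothetical sketch of the stale-config check logged above: if any of the
// four kubeconfig files under /etc/kubernetes is missing, cleanup of stale
// configs is skipped and a fresh `kubeadm init` is started instead.
package main

import (
	"fmt"
	"os"
)

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	missing := false
	for _, c := range confs {
		if _, err := os.Stat(c); err != nil {
			fmt.Printf("ls: cannot access '%s': %v\n", c, err)
			missing = true
		}
	}
	if missing {
		fmt.Println("config check failed, skipping stale config cleanup; running kubeadm init")
	} else {
		fmt.Println("all kubeconfig files present; stale config cleanup would run first")
	}
}
```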
	I0131 03:24:32.310672 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:34.808959 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:36.696825 1465898 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:36.696904 1465898 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:36.696998 1465898 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:36.697121 1465898 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:36.697231 1465898 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:36.697306 1465898 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:36.699102 1465898 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:36.699244 1465898 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:36.699334 1465898 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:36.699475 1465898 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:36.699584 1465898 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:36.699700 1465898 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:36.699785 1465898 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:36.699873 1465898 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:36.699958 1465898 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:36.700052 1465898 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:36.700172 1465898 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:36.700217 1465898 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:36.700283 1465898 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:36.700345 1465898 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:36.700406 1465898 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:36.700482 1465898 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:36.700549 1465898 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:36.700647 1465898 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:36.700731 1465898 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:36.702370 1465898 out.go:204]   - Booting up control plane ...
	I0131 03:24:36.702525 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:36.702658 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:36.702731 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:36.702855 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:36.702975 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:36.703038 1465898 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:36.703248 1465898 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:36.703360 1465898 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503117 seconds
	I0131 03:24:36.703517 1465898 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:36.703652 1465898 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:36.703734 1465898 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:36.703950 1465898 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-873005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:36.704029 1465898 kubeadm.go:322] [bootstrap-token] Using token: 51ueuu.c5jl6zenf29j1pbj
	I0131 03:24:36.706123 1465898 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:36.706237 1465898 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:36.706316 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:36.706475 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:36.706662 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:36.706829 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:36.706946 1465898 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:36.707093 1465898 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:36.707179 1465898 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:36.707226 1465898 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:36.707236 1465898 kubeadm.go:322] 
	I0131 03:24:36.707310 1465898 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:36.707317 1465898 kubeadm.go:322] 
	I0131 03:24:36.707411 1465898 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:36.707418 1465898 kubeadm.go:322] 
	I0131 03:24:36.707438 1465898 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:36.707518 1465898 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:36.707590 1465898 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:36.707604 1465898 kubeadm.go:322] 
	I0131 03:24:36.707693 1465898 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:36.707706 1465898 kubeadm.go:322] 
	I0131 03:24:36.707775 1465898 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:36.707785 1465898 kubeadm.go:322] 
	I0131 03:24:36.707834 1465898 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:36.707932 1465898 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:36.708029 1465898 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:36.708038 1465898 kubeadm.go:322] 
	I0131 03:24:36.708135 1465898 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:36.708236 1465898 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:36.708245 1465898 kubeadm.go:322] 
	I0131 03:24:36.708341 1465898 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708458 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:36.708490 1465898 kubeadm.go:322] 	--control-plane 
	I0131 03:24:36.708499 1465898 kubeadm.go:322] 
	I0131 03:24:36.708601 1465898 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:36.708611 1465898 kubeadm.go:322] 
	I0131 03:24:36.708703 1465898 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708836 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:36.708855 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:24:36.708865 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:36.710643 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:33.579236 1466459 pod_ready.go:81] duration metric: took 4m0.001168183s waiting for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:33.579284 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:33.579320 1466459 pod_ready.go:38] duration metric: took 4m12.550695133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:33.579357 1466459 kubeadm.go:640] restartCluster took 4m32.725356038s
	W0131 03:24:33.579451 1466459 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:33.579495 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:36.712379 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:36.727135 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
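The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration mentioned two lines earlier. Its exact contents are not shown in the log; the Go sketch below writes an illustrative bridge conflist whose field values are assumptions for demonstration, not the bytes minikube actually ships:

```go
// Illustrative only: writes a bridge CNI config similar in shape to the
// 1-k8s.conflist copied above. Plugin list and values are assumed, not the
// exact payload from the log.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /etc/cni/net.d/1-k8s.conflist")
}
```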
	I0131 03:24:36.752650 1465898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:36.752760 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.752766 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=default-k8s-diff-port-873005 minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.833601 1465898 ops.go:34] apiserver oom_adj: -16
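The "apiserver oom_adj: -16" line records the OOM score adjustment read from the running kube-apiserver process (a strongly negative value means the kernel is unlikely to OOM-kill it). A small hypothetical Go sketch of that read, mirroring the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command above rather than minikube's ops.go:

```go
// Hypothetical sketch: find the kube-apiserver PID and read its oom_adj value
// from /proc, as the shell command in the log does.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatalf("kube-apiserver not running: %v", err)
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
```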
	I0131 03:24:37.204982 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:37.706104 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.205928 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.705169 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:39.205448 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.810623 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:39.308000 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:44.456046 1465727 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0131 03:24:44.456133 1465727 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:44.456239 1465727 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:44.456349 1465727 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:44.456507 1465727 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:44.456673 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:44.456815 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:44.456888 1465727 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0131 03:24:44.456975 1465727 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:44.458558 1465727 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:44.458637 1465727 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:44.458740 1465727 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:44.458837 1465727 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:44.458937 1465727 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:44.459040 1465727 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:44.459117 1465727 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:44.459212 1465727 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:44.459291 1465727 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:44.459385 1465727 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:44.459491 1465727 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:44.459552 1465727 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:44.459628 1465727 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:44.459691 1465727 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:44.459755 1465727 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:44.459827 1465727 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:44.459899 1465727 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:44.460002 1465727 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:44.461481 1465727 out.go:204]   - Booting up control plane ...
	I0131 03:24:44.461592 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:44.461687 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:44.461801 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:44.461930 1465727 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:44.462130 1465727 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:44.462255 1465727 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503405 seconds
	I0131 03:24:44.462398 1465727 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:44.462577 1465727 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:44.462653 1465727 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:44.462817 1465727 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-711547 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0131 03:24:44.462913 1465727 kubeadm.go:322] [bootstrap-token] Using token: etlsjx.t1u4cz6ewuek932w
	I0131 03:24:44.465248 1465727 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:44.465404 1465727 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:44.465615 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:44.465805 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:44.465987 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:44.466088 1465727 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:44.466170 1465727 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:44.466239 1465727 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:44.466247 1465727 kubeadm.go:322] 
	I0131 03:24:44.466332 1465727 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:44.466354 1465727 kubeadm.go:322] 
	I0131 03:24:44.466456 1465727 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:44.466473 1465727 kubeadm.go:322] 
	I0131 03:24:44.466524 1465727 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:44.466596 1465727 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:44.466677 1465727 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:44.466696 1465727 kubeadm.go:322] 
	I0131 03:24:44.466764 1465727 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:44.466870 1465727 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:44.466971 1465727 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:44.466988 1465727 kubeadm.go:322] 
	I0131 03:24:44.467085 1465727 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0131 03:24:44.467196 1465727 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:44.467208 1465727 kubeadm.go:322] 
	I0131 03:24:44.467300 1465727 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467443 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:44.467479 1465727 kubeadm.go:322]     --control-plane 	  
	I0131 03:24:44.467488 1465727 kubeadm.go:322] 
	I0131 03:24:44.467588 1465727 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:44.467599 1465727 kubeadm.go:322] 
	I0131 03:24:44.467695 1465727 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467834 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:44.467849 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:24:44.467858 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:44.470130 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:39.705234 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.205164 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.705674 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.205045 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.705592 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.205813 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.705913 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.205465 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.705236 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.205365 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.807553 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:43.809153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:47.613982 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.034446752s)
	I0131 03:24:47.614087 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:47.627141 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:47.635785 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:47.643856 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:47.643912 1466459 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:47.866988 1466459 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:44.472066 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:44.484082 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:44.503062 1465727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:44.503138 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.503164 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=old-k8s-version-711547 minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.557194 1465727 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:44.796311 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.296601 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.796904 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.296474 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.796658 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.296647 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.796712 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.296469 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.705251 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.205696 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.705947 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.205519 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.705735 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.205285 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.706009 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.205416 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.705969 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.205783 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.306658 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:48.307077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:50.311654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:49.705636 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.205958 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.456803 1465898 kubeadm.go:1088] duration metric: took 13.704121927s to wait for elevateKubeSystemPrivileges.
	I0131 03:24:50.456854 1465898 kubeadm.go:406] StartCluster complete in 5m9.932475085s
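The long run of "kubectl get sa default" lines above, repeated roughly every 500ms, is the wait that the "13.704121927s to wait for elevateKubeSystemPrivileges" line summarizes: minikube polls until the controller manager has created the "default" service account in kube-system. A hypothetical Go sketch of that retry loop (paths and timeout are taken from the log or assumed, not minikube's actual code):

```go
// Hypothetical sketch of the retry loop behind the repeated
// "kubectl get sa default" lines: poll until the default service account
// exists, then the elevated RBAC binding for kube-system can take effect.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout for the sketch
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			log.Println("default service account exists; wait complete")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default service account")
}
```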
	I0131 03:24:50.456883 1465898 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.457001 1465898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:24:50.460015 1465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.460408 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:24:50.460617 1465898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:24:50.460718 1465898 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460745 1465898 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.460753 1465898 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:24:50.460798 1465898 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460831 1465898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-873005"
	I0131 03:24:50.460855 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461315 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461342 1465898 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.461361 1465898 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:50.461364 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0131 03:24:50.461369 1465898 addons.go:243] addon metrics-server should already be in state true
	I0131 03:24:50.461410 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461322 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461644 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.461778 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461812 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.460670 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:24:50.486168 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0131 03:24:50.486189 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0131 03:24:50.486323 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0131 03:24:50.486737 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487153 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487761 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.487781 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488055 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.488074 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488193 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.488460 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.488587 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.488984 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.489649 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.489717 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.490413 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.490433 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.492357 1465898 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.492372 1465898 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:24:50.492402 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.492774 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.492815 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.493142 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.493853 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.493904 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.510041 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0131 03:24:50.510628 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.511294 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.511316 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.511749 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.511982 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.512352 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0131 03:24:50.512842 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.513435 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.513454 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.513922 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.513984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.514319 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0131 03:24:50.516752 1465898 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:24:50.514718 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.514788 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.518232 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:24:50.518238 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.518248 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:24:50.518271 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.521721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.522659 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522988 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.523038 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.523050 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.523231 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.523401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.523571 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.526843 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.530691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.532381 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.534246 1465898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:24:50.535799 1465898 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.535826 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:24:50.535848 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.538666 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.538998 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.539031 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.539275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.540037 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0131 03:24:50.540217 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.540435 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.540502 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.540575 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.541462 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.541480 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.541918 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.542136 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.543588 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.546790 1465898 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.546807 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:24:50.546828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.549791 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550227 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.550254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550545 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.550712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.550827 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.550914 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.720404 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:24:50.750602 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:24:50.750631 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:24:50.770493 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.781740 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.831005 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:24:50.831037 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:24:50.957145 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:50.957195 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:24:50.995868 1465898 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-873005" context rescaled to 1 replicas
	I0131 03:24:50.995924 1465898 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:24:50.997774 1465898 out.go:177] * Verifying Kubernetes components...
	I0131 03:24:50.999400 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:51.127181 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:52.814257 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.093763301s)
	I0131 03:24:52.814295 1465898 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
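The sed pipeline that just completed injects a hosts{} stanza mapping host.minikube.internal to the host gateway (192.168.61.1 here) into the CoreDNS Corefile, just before its forward-to-resolv.conf rule, and re-applies the ConfigMap with `kubectl replace -f -`. A hypothetical Go sketch of the same edit on a simplified, assumed Corefile:

```go
// Hypothetical sketch of the CoreDNS edit performed by the sed pipeline above:
// insert a hosts{} block for host.minikube.internal before the
// "forward . /etc/resolv.conf" line. The sample Corefile below is assumed.
package main

import (
	"fmt"
	"strings"
)

const hostsBlock = `        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }
`

func main() {
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
}
`
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // inject the host record before the forwarder
		}
		out.WriteString(line)
	}
	fmt.Print(out.String())
}
```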
	I0131 03:24:53.442603 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.660817091s)
	I0131 03:24:53.442735 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.315510869s)
	I0131 03:24:53.442653 1465898 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.443214595s)
	I0131 03:24:53.442784 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442807 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442746 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442847 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442800 1465898 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.442686 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.672154364s)
	I0131 03:24:53.442931 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442944 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443178 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443204 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443234 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443271 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443290 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443307 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443324 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443326 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443342 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443355 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443370 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443443 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443463 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443474 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443484 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443558 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443571 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443834 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443843 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443852 1465898 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:53.443857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.444009 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.444018 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.477413 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.477442 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.477848 1465898 node_ready.go:49] node "default-k8s-diff-port-873005" has status "Ready":"True"
	I0131 03:24:53.477878 1465898 node_ready.go:38] duration metric: took 34.988647ms waiting for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.477903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.477913 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.477891 1465898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:53.477926 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:48.797209 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.296541 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.796400 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.297357 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.797175 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.297121 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.796457 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.297151 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.797043 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.296354 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.480701 1465898 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0131 03:24:53.482138 1465898 addons.go:505] enable addons completed in 3.021541847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0131 03:24:53.518183 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
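The pod_ready lines throughout this log are the same pattern applied per pod: the pod's Ready condition is polled until it reports "True" or the wait times out, which is what eventually happens for the metrics-server pods stuck at Ready:"False" above. A hypothetical Go sketch of that readiness poll using kubectl (pod name and timeout taken from the log; the helper itself is not minikube's implementation):

```go
// Hypothetical sketch of a pod readiness wait: poll the Ready condition of a
// pod via kubectl jsonpath until it is "True" or the deadline passes.
package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "coredns-5dd5756b68-2jm8s")
		if err == nil && ready {
			log.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to be Ready")
}
```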
	I0131 03:24:52.806757 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:54.808761 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:53.796405 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.296358 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.796988 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.296633 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.797131 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.296750 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.797103 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.296955 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.796330 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.296387 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.837963 1466459 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:58.838075 1466459 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:58.838193 1466459 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:58.838328 1466459 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:58.838507 1466459 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:58.838599 1466459 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:58.840259 1466459 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:58.840364 1466459 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:58.840490 1466459 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:58.840620 1466459 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:58.840718 1466459 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:58.840826 1466459 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:58.840905 1466459 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:58.841008 1466459 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:58.841106 1466459 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:58.841214 1466459 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:58.841304 1466459 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:58.841349 1466459 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:58.841420 1466459 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:58.841492 1466459 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:58.841553 1466459 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:58.841621 1466459 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:58.841694 1466459 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:58.841805 1466459 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:58.841887 1466459 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:58.843555 1466459 out.go:204]   - Booting up control plane ...
	I0131 03:24:58.843684 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:58.843804 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:58.843917 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:58.844072 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:58.844208 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:58.844297 1466459 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:58.844540 1466459 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:58.844657 1466459 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003861 seconds
	I0131 03:24:58.844797 1466459 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:58.844947 1466459 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:58.845022 1466459 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:58.845232 1466459 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-958254 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:58.845309 1466459 kubeadm.go:322] [bootstrap-token] Using token: ash1vg.z2czyygl2nysl4yb
	I0131 03:24:58.846832 1466459 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:58.846943 1466459 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:58.847042 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:58.847238 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:58.847445 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:58.847620 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:58.847735 1466459 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:58.847908 1466459 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:58.847969 1466459 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:58.848034 1466459 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:58.848045 1466459 kubeadm.go:322] 
	I0131 03:24:58.848142 1466459 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:58.848152 1466459 kubeadm.go:322] 
	I0131 03:24:58.848279 1466459 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:58.848308 1466459 kubeadm.go:322] 
	I0131 03:24:58.848355 1466459 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:58.848440 1466459 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:58.848515 1466459 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:58.848531 1466459 kubeadm.go:322] 
	I0131 03:24:58.848611 1466459 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:58.848622 1466459 kubeadm.go:322] 
	I0131 03:24:58.848684 1466459 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:58.848692 1466459 kubeadm.go:322] 
	I0131 03:24:58.848769 1466459 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:58.848884 1466459 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:58.848987 1466459 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:58.848994 1466459 kubeadm.go:322] 
	I0131 03:24:58.849127 1466459 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:58.849252 1466459 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:58.849265 1466459 kubeadm.go:322] 
	I0131 03:24:58.849390 1466459 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849540 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:58.849572 1466459 kubeadm.go:322] 	--control-plane 
	I0131 03:24:58.849587 1466459 kubeadm.go:322] 
	I0131 03:24:58.849698 1466459 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:58.849710 1466459 kubeadm.go:322] 
	I0131 03:24:58.849817 1466459 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849963 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:58.849981 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:24:58.849991 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:58.851748 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:54.532127 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.532155 1465898 pod_ready.go:81] duration metric: took 1.013942045s waiting for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.532164 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537895 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.537924 1465898 pod_ready.go:81] duration metric: took 5.752669ms waiting for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537937 1465898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543819 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.543850 1465898 pod_ready.go:81] duration metric: took 5.903392ms waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543863 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549279 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.549303 1465898 pod_ready.go:81] duration metric: took 5.431331ms waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549315 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647791 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.647830 1465898 pod_ready.go:81] duration metric: took 98.504261ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647846 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446878 1465898 pod_ready.go:92] pod "kube-proxy-blwwq" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.446913 1465898 pod_ready.go:81] duration metric: took 799.058225ms waiting for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446927 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848226 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.848261 1465898 pod_ready.go:81] duration metric: took 401.323547ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848275 1465898 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:57.855091 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:57.306243 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:59.307152 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:58.796423 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.297312 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.796598 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.296932 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.797306 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.963954 1465727 kubeadm.go:1088] duration metric: took 16.460870964s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:00.964007 1465727 kubeadm.go:406] StartCluster complete in 5m39.492487154s
	I0131 03:25:00.964037 1465727 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.964135 1465727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:00.965942 1465727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.966222 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:00.966379 1465727 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:00.966464 1465727 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966478 1465727 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966474 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:25:00.966502 1465727 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966514 1465727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-711547"
	I0131 03:25:00.966522 1465727 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-711547"
	W0131 03:25:00.966531 1465727 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:00.966493 1465727 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-711547"
	W0131 03:25:00.966557 1465727 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:00.966579 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966610 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966981 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.966993 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967028 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967040 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967142 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967186 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.986034 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0131 03:25:00.986291 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0131 03:25:00.986619 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.986746 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.987299 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987320 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987467 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987479 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987834 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.988010 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:00.988075 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0131 03:25:00.988399 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.989011 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.989031 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.989620 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.990204 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.990247 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.990830 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.991921 1465727 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-711547"
	W0131 03:25:00.991946 1465727 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:00.991979 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.992390 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.992429 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.996772 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.996817 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.009234 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0131 03:25:01.009861 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.010560 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.010580 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.011185 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.011401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.013070 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0131 03:25:01.013907 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.014029 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.016324 1465727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:01.014597 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.017922 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.018046 1465727 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.018070 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:01.018094 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.018526 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.019101 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:01.019150 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.019442 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0131 03:25:01.019888 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.020393 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.020424 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.020822 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.020992 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.021500 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.022242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.022654 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.022821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.022997 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.023406 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.025473 1465727 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:01.026870 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:01.026888 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:01.026904 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.029751 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030085 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.030100 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030398 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.030647 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.030818 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.030977 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.037553 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0131 03:25:01.038049 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.038517 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.038542 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.038963 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.039329 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.041534 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.042115 1465727 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.042137 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:01.042170 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.045444 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.045973 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.045992 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.046187 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.046374 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.046619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.046751 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.284926 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:01.284951 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:01.298019 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:01.338666 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.364117 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.383424 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:01.383460 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:01.499627 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.499676 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:01.557563 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.633792 1465727 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-711547" context rescaled to 1 replicas
	I0131 03:25:01.633844 1465727 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:01.636944 1465727 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:01.638596 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:02.375769 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.07770508s)
	I0131 03:25:02.375806 1465727 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:02.849278 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.485115978s)
	I0131 03:25:02.849343 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849348 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.510642603s)
	I0131 03:25:02.849361 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849397 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849411 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849431 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291827391s)
	I0131 03:25:02.849463 1465727 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.210839065s)
	I0131 03:25:02.849466 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849478 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849490 1465727 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.851686 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851687 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851705 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851714 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851701 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851724 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851732 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851715 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851726 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851744 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851749 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851754 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851736 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851812 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851828 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.852136 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852158 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852178 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852187 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852194 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852203 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852214 1465727 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-711547"
	I0131 03:25:02.852220 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852249 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852257 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.878278 1465727 node_ready.go:49] node "old-k8s-version-711547" has status "Ready":"True"
	I0131 03:25:02.878313 1465727 node_ready.go:38] duration metric: took 28.809729ms waiting for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.878339 1465727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:02.906619 1465727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:02.910781 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.910809 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.911127 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.911137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.911148 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.913178 1465727 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0131 03:24:58.853196 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:58.880016 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:58.909967 1466459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:58.910062 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.910111 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=embed-certs-958254 minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.271954 1466459 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:59.310346 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.810934 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.310635 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.810402 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.310569 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.810714 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.310744 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.811360 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:03.311376 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.915069 1465727 addons.go:505] enable addons completed in 1.948706414s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0131 03:24:59.856962 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:02.358614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:01.807470 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:04.306044 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:03.811326 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.310435 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.811033 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.310537 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.810596 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.311182 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.811200 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.310633 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.810619 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:08.310985 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.914636 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:07.415226 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.414866 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.414894 1465727 pod_ready.go:81] duration metric: took 5.508246838s waiting for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.414904 1465727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421152 1465727 pod_ready.go:92] pod "kube-proxy-wzft2" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.421177 1465727 pod_ready.go:81] duration metric: took 6.2664ms waiting for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421191 1465727 pod_ready.go:38] duration metric: took 5.542837407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:08.421243 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:08.421313 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:08.439228 1465727 api_server.go:72] duration metric: took 6.805346982s to wait for apiserver process to appear ...
	I0131 03:25:08.439258 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:08.439321 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:25:08.445886 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:25:08.446826 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:25:08.446848 1465727 api_server.go:131] duration metric: took 7.582095ms to wait for apiserver health ...
	I0131 03:25:08.446856 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:08.450063 1465727 system_pods.go:59] 4 kube-system pods found
	I0131 03:25:08.450085 1465727 system_pods.go:61] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.450089 1465727 system_pods.go:61] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.450095 1465727 system_pods.go:61] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.450100 1465727 system_pods.go:61] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.450112 1465727 system_pods.go:74] duration metric: took 3.250434ms to wait for pod list to return data ...
	I0131 03:25:08.450121 1465727 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:08.452528 1465727 default_sa.go:45] found service account: "default"
	I0131 03:25:08.452546 1465727 default_sa.go:55] duration metric: took 2.420247ms for default service account to be created ...
	I0131 03:25:08.452553 1465727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:08.457485 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.457514 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.457522 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.457533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.457540 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.457561 1465727 retry.go:31] will retry after 235.942588ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:04.856217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.856378 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.857457 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.800354 1465496 pod_ready.go:81] duration metric: took 4m0.001111271s waiting for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:06.800395 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:25:06.800424 1465496 pod_ready.go:38] duration metric: took 4m13.561240535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:06.800474 1465496 kubeadm.go:640] restartCluster took 4m33.63933558s
	W0131 03:25:06.800585 1465496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:25:06.800626 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:25:08.811193 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.310464 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.810641 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.310665 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.810667 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.995304 1466459 kubeadm.go:1088] duration metric: took 12.08531849s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:10.995343 1466459 kubeadm.go:406] StartCluster complete in 5m10.197561628s
	I0131 03:25:10.995368 1466459 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.995476 1466459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:10.997565 1466459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.998562 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:10.998861 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:25:10.999077 1466459 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:10.999167 1466459 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-958254"
	I0131 03:25:10.999184 1466459 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-958254"
	W0131 03:25:10.999192 1466459 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:10.999198 1466459 addons.go:69] Setting default-storageclass=true in profile "embed-certs-958254"
	I0131 03:25:10.999232 1466459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-958254"
	I0131 03:25:10.999234 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:10.999598 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999631 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999673 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999709 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999738 1466459 addons.go:69] Setting metrics-server=true in profile "embed-certs-958254"
	I0131 03:25:10.999759 1466459 addons.go:234] Setting addon metrics-server=true in "embed-certs-958254"
	W0131 03:25:10.999767 1466459 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:10.999811 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.000160 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.000206 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.020646 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0131 03:25:11.020716 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0131 03:25:11.021273 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021412 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021944 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.021972 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022107 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.022139 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022542 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022540 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022777 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.023181 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.023224 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.027202 1466459 addons.go:234] Setting addon default-storageclass=true in "embed-certs-958254"
	W0131 03:25:11.027230 1466459 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:11.027263 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.027702 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.027754 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.028003 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0131 03:25:11.029048 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.029571 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.029590 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.030209 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.030885 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.030931 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.042923 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0131 03:25:11.043492 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.044071 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.044086 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.044497 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.044800 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.046645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.049444 1466459 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:11.051401 1466459 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.051441 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:11.051477 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.054476 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055341 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.055429 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0131 03:25:11.055608 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.055626 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055808 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.056025 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.056244 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.056409 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.056920 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.056932 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.056989 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0131 03:25:11.057274 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.057428 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.057495 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.057847 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.057860 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.058662 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.059343 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.059372 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.059555 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.061701 1466459 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:11.063119 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:11.063138 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:11.063159 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.066101 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066408 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.066423 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066762 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.066931 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.067054 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.067162 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.080881 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0131 03:25:11.081403 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.081919 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.081931 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.082442 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.082905 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.085059 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.085518 1466459 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.085529 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:11.085545 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.087954 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.088806 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.088858 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.088868 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.089011 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.089197 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.089609 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.229346 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.255093 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:11.255124 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:11.278162 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.314832 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:11.314860 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:11.374433 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.374463 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:11.386186 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:11.431597 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.617487 1466459 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-958254" context rescaled to 1 replicas
	I0131 03:25:11.617543 1466459 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:11.620222 1466459 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:11.621888 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:08.700194 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.700226 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.700232 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.700238 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.700243 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.700267 1465727 retry.go:31] will retry after 264.487072ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:08.970950 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.970994 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.971002 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.971013 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.971020 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.971113 1465727 retry.go:31] will retry after 296.249207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.273631 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.273666 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.273675 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.273683 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.273696 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.273722 1465727 retry.go:31] will retry after 556.880076ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.835957 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.835985 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.835991 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.835997 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.836002 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.836020 1465727 retry.go:31] will retry after 541.012405ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:10.382622 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:10.382657 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:10.382665 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:10.382674 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:10.382681 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:10.382705 1465727 retry.go:31] will retry after 644.079363ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.036738 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.036777 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.036785 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.036796 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.036803 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.036825 1465727 retry.go:31] will retry after 832.963851ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.877526 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.877569 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.877578 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.877589 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.877597 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.877635 1465727 retry.go:31] will retry after 1.088792554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:12.972355 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:12.972391 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:12.972397 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:12.972403 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:12.972408 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:12.972428 1465727 retry.go:31] will retry after 1.37018086s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:13.615542 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337333269s)
	I0131 03:25:13.615599 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.229373467s)
	I0131 03:25:13.615607 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615633 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.615632 1466459 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:13.615738 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.386359945s)
	I0131 03:25:13.615790 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615807 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616101 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616109 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616118 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616129 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616138 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616174 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616184 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616194 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616204 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616351 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616374 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.617924 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.618094 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.618057 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.783459 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.783487 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.783847 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.783872 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.966310 1466459 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.344369704s)
	I0131 03:25:13.966372 1466459 node_ready.go:35] waiting up to 6m0s for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.966498 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.534826964s)
	I0131 03:25:13.966582 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.966602 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.966990 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967011 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967023 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.967033 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.967278 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967298 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967310 1466459 addons.go:470] Verifying addon metrics-server=true in "embed-certs-958254"
	I0131 03:25:13.970159 1466459 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:10.858108 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.357207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.971527 1466459 addons.go:505] enable addons completed in 2.972461213s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:13.987533 1466459 node_ready.go:49] node "embed-certs-958254" has status "Ready":"True"
	I0131 03:25:13.987564 1466459 node_ready.go:38] duration metric: took 21.175558ms waiting for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.987577 1466459 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:13.998968 1466459 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505741 1466459 pod_ready.go:92] pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.505764 1466459 pod_ready.go:81] duration metric: took 1.506759288s waiting for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505775 1466459 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511011 1466459 pod_ready.go:92] pod "etcd-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.511037 1466459 pod_ready.go:81] duration metric: took 5.255671ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511050 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515672 1466459 pod_ready.go:92] pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.515691 1466459 pod_ready.go:81] duration metric: took 4.632936ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515699 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520372 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.520388 1466459 pod_ready.go:81] duration metric: took 4.683171ms waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520397 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570633 1466459 pod_ready.go:92] pod "kube-proxy-2n2v5" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.570660 1466459 pod_ready.go:81] duration metric: took 50.257557ms waiting for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570671 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970302 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.970325 1466459 pod_ready.go:81] duration metric: took 399.647846ms waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970336 1466459 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:17.977775 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:14.349642 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:14.349679 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:14.349688 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:14.349698 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:14.349705 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:14.349726 1465727 retry.go:31] will retry after 1.923619057s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:16.279057 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:16.279090 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:16.279098 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:16.279108 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:16.279114 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:16.279137 1465727 retry.go:31] will retry after 2.073030623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:18.359162 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:18.359189 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:18.359195 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:18.359204 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:18.359209 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:18.359228 1465727 retry.go:31] will retry after 3.260033275s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:15.855521 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:17.855614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:20.514278 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.713623849s)
	I0131 03:25:20.514394 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:20.527663 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:25:20.536562 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:25:20.545294 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:25:20.545336 1465496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:25:20.598639 1465496 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0131 03:25:20.598867 1465496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:25:20.744229 1465496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:25:20.744371 1465496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:25:20.744509 1465496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:25:20.966346 1465496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:25:20.968311 1465496 out.go:204]   - Generating certificates and keys ...
	I0131 03:25:20.968451 1465496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:25:20.968540 1465496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:25:20.968652 1465496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:25:20.968758 1465496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:25:20.968846 1465496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:25:20.969285 1465496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:25:20.969711 1465496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:25:20.970103 1465496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:25:20.970500 1465496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:25:20.970914 1465496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:25:20.971238 1465496 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:25:20.971319 1465496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:25:21.137192 1465496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:25:21.403913 1465496 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0131 03:25:21.508809 1465496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:25:21.721878 1465496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:25:22.136726 1465496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:25:22.137207 1465496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:25:22.139977 1465496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:25:19.979362 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.477779 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.624554 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:21.624586 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:21.624592 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:21.624602 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:21.624607 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:21.624626 1465727 retry.go:31] will retry after 3.519201574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:19.856226 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.856396 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:23.857487 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.141783 1465496 out.go:204]   - Booting up control plane ...
	I0131 03:25:22.141884 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:25:22.141972 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:25:22.143031 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:25:22.163448 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:25:22.163586 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:25:22.163682 1465496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:25:22.287643 1465496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:25:24.479871 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:26.977625 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:25.149248 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:25.149277 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:25.149282 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:25.149290 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:25.149295 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:25.149314 1465727 retry.go:31] will retry after 5.238557946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:25.857650 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:28.356862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.793355 1465496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506089 seconds
	I0131 03:25:30.811559 1465496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:25:30.830148 1465496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:25:31.367774 1465496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:25:31.368036 1465496 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-625812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:25:31.887121 1465496 kubeadm.go:322] [bootstrap-token] Using token: t3t0h9.3huj9bl3w24ti869
	I0131 03:25:31.888852 1465496 out.go:204]   - Configuring RBAC rules ...
	I0131 03:25:31.888974 1465496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:25:31.893841 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:25:31.902695 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:25:31.908132 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:25:31.912738 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:25:31.918089 1465496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:25:31.936690 1465496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:25:32.182433 1465496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:25:32.325953 1465496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:25:32.325981 1465496 kubeadm.go:322] 
	I0131 03:25:32.326114 1465496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:25:32.326143 1465496 kubeadm.go:322] 
	I0131 03:25:32.326244 1465496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:25:32.326272 1465496 kubeadm.go:322] 
	I0131 03:25:32.326332 1465496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:25:32.326416 1465496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:25:32.326500 1465496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:25:32.326511 1465496 kubeadm.go:322] 
	I0131 03:25:32.326588 1465496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:25:32.326598 1465496 kubeadm.go:322] 
	I0131 03:25:32.326664 1465496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:25:32.326674 1465496 kubeadm.go:322] 
	I0131 03:25:32.326743 1465496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:25:32.326853 1465496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:25:32.326947 1465496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:25:32.326958 1465496 kubeadm.go:322] 
	I0131 03:25:32.327052 1465496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:25:32.327151 1465496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:25:32.327160 1465496 kubeadm.go:322] 
	I0131 03:25:32.327264 1465496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327405 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:25:32.327437 1465496 kubeadm.go:322] 	--control-plane 
	I0131 03:25:32.327447 1465496 kubeadm.go:322] 
	I0131 03:25:32.327553 1465496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:25:32.327564 1465496 kubeadm.go:322] 
	I0131 03:25:32.327667 1465496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327800 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:25:32.328638 1465496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:25:32.328815 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:25:32.328835 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:25:32.330439 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:25:28.984930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:31.480349 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.393923 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:30.393959 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:30.393968 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:30.393979 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:30.393985 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:30.394010 1465727 retry.go:31] will retry after 6.045479872s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:30.357227 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.358411 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.332529 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:25:32.442284 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:25:32.487754 1465496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:25:32.487829 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.487926 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=no-preload-625812 minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.706857 1465496 ops.go:34] apiserver oom_adj: -16
	I0131 03:25:32.707010 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.207717 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.707229 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.207690 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.707786 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:35.207781 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.980255 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.481025 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.444898 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:36.444932 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:36.444938 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:36.444946 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:36.444951 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:36.444993 1465727 retry.go:31] will retry after 6.676077992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:34.855915 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:37.356945 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:35.707273 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.207173 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.707797 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.207697 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.707209 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.207989 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.707538 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.207693 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.707737 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:40.207439 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.980635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:41.479377 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:43.125885 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:43.125912 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:43.125917 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:43.125924 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:43.125928 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:43.125947 1465727 retry.go:31] will retry after 7.454064585s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:39.858377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:42.356966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:40.707639 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.207708 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.707131 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.207700 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.707292 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.207810 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.707392 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.207490 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.707258 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.883783 1465496 kubeadm.go:1088] duration metric: took 12.396028951s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:44.883823 1465496 kubeadm.go:406] StartCluster complete in 5m11.777629477s
	I0131 03:25:44.883850 1465496 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.883949 1465496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:44.886319 1465496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.886620 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:44.886727 1465496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:44.886814 1465496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-625812"
	I0131 03:25:44.886837 1465496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-625812"
	W0131 03:25:44.886849 1465496 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:44.886903 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.886934 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:25:44.886991 1465496 addons.go:69] Setting default-storageclass=true in profile "no-preload-625812"
	I0131 03:25:44.887007 1465496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-625812"
	I0131 03:25:44.887134 1465496 addons.go:69] Setting metrics-server=true in profile "no-preload-625812"
	I0131 03:25:44.887155 1465496 addons.go:234] Setting addon metrics-server=true in "no-preload-625812"
	W0131 03:25:44.887164 1465496 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:44.887216 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.887313 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887349 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887407 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887439 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887611 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887655 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.908876 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0131 03:25:44.908881 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0131 03:25:44.908879 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0131 03:25:44.909406 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909433 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909512 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909925 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.909950 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910054 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910098 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910123 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910148 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910434 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910530 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910543 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910740 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.911086 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911140 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.911185 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911230 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.914635 1465496 addons.go:234] Setting addon default-storageclass=true in "no-preload-625812"
	W0131 03:25:44.914667 1465496 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:44.914698 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.915089 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.915135 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.931265 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0131 03:25:44.931296 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0131 03:25:44.931816 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.931859 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.932148 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932599 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932677 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932938 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933062 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.933655 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.933681 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.933726 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933947 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934129 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.934262 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934954 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.935001 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.936333 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.938601 1465496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:44.940239 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:44.940256 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:44.940273 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.938638 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.942306 1465496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:44.944873 1465496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:44.944894 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:44.944914 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.943649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944987 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.945023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944263 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.945795 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.946072 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.946309 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.949097 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949522 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.949544 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949710 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.949892 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.950040 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.950179 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.959691 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0131 03:25:44.960146 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.960696 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.960723 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.961045 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.961279 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.963057 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.963321 1465496 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:44.963342 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:44.963363 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.966336 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.966808 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.966845 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.967006 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.967205 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.967329 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.967472 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:45.114858 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:45.135760 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:45.209439 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:45.209466 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:45.219146 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:45.287400 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:45.287430 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:45.380888 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:45.380917 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:45.462341 1465496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-625812" context rescaled to 1 replicas
	I0131 03:25:45.462403 1465496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:45.463834 1465496 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:45.465542 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:45.515980 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:46.322228 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.20732453s)
	I0131 03:25:46.322281 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.186472094s)
	I0131 03:25:46.322327 1465496 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:46.322296 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322366 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322413 1465496 node_ready.go:35] waiting up to 6m0s for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.322369 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.103177926s)
	I0131 03:25:46.322663 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322676 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322757 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.322760 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.322773 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.322783 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322791 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323137 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323156 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323167 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.323176 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323177 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323257 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323281 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323295 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323733 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323755 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.329699 1465496 node_ready.go:49] node "no-preload-625812" has status "Ready":"True"
	I0131 03:25:46.329719 1465496 node_ready.go:38] duration metric: took 7.243031ms waiting for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.329728 1465496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:46.345672 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.345703 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.345984 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.346000 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.348953 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:46.699387 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183353653s)
	I0131 03:25:46.699456 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699474 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.699910 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.699932 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.699945 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699957 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.700251 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.700272 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.700285 1465496 addons.go:470] Verifying addon metrics-server=true in "no-preload-625812"
	I0131 03:25:46.702053 1465496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:43.980700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.478141 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:44.855513 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.857198 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:49.356657 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.703328 1465496 addons.go:505] enable addons completed in 1.816619953s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:46.865293 1465496 pod_ready.go:97] error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865325 1465496 pod_ready.go:81] duration metric: took 516.342792ms waiting for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:46.865336 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865343 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872316 1465496 pod_ready.go:92] pod "coredns-76f75df574-hvxjf" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.872345 1465496 pod_ready.go:81] duration metric: took 1.006996095s waiting for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872355 1465496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878192 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.878215 1465496 pod_ready.go:81] duration metric: took 5.854656ms waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878223 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883120 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.883139 1465496 pod_ready.go:81] duration metric: took 4.910099ms waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883147 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889909 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.889934 1465496 pod_ready.go:81] duration metric: took 6.780796ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889944 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926206 1465496 pod_ready.go:92] pod "kube-proxy-pkvj6" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:48.926230 1465496 pod_ready.go:81] duration metric: took 1.036280111s waiting for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926239 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325588 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:49.325613 1465496 pod_ready.go:81] duration metric: took 399.368272ms waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325623 1465496 pod_ready.go:38] duration metric: took 2.995885901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:49.325640 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:49.325693 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:49.339591 1465496 api_server.go:72] duration metric: took 3.877145066s to wait for apiserver process to appear ...
	I0131 03:25:49.339624 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:49.339652 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:25:49.345130 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:25:49.346350 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:25:49.346371 1465496 api_server.go:131] duration metric: took 6.739501ms to wait for apiserver health ...
	I0131 03:25:49.346379 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:49.529845 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:25:49.529876 1465496 system_pods.go:61] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.529881 1465496 system_pods.go:61] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.529885 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.529890 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.529894 1465496 system_pods.go:61] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.529898 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.529905 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.529909 1465496 system_pods.go:61] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.529918 1465496 system_pods.go:74] duration metric: took 183.532223ms to wait for pod list to return data ...
	I0131 03:25:49.529926 1465496 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:49.726239 1465496 default_sa.go:45] found service account: "default"
	I0131 03:25:49.726266 1465496 default_sa.go:55] duration metric: took 196.333831ms for default service account to be created ...
	I0131 03:25:49.726276 1465496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:49.933151 1465496 system_pods.go:86] 8 kube-system pods found
	I0131 03:25:49.933188 1465496 system_pods.go:89] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.933198 1465496 system_pods.go:89] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.933205 1465496 system_pods.go:89] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.933212 1465496 system_pods.go:89] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.933220 1465496 system_pods.go:89] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.933228 1465496 system_pods.go:89] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.933243 1465496 system_pods.go:89] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.933254 1465496 system_pods.go:89] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.933268 1465496 system_pods.go:126] duration metric: took 206.984671ms to wait for k8s-apps to be running ...
	I0131 03:25:49.933282 1465496 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:25:49.933345 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:49.949256 1465496 system_svc.go:56] duration metric: took 15.963316ms WaitForService to wait for kubelet.
	I0131 03:25:49.949290 1465496 kubeadm.go:581] duration metric: took 4.486852525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:25:49.949316 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:25:50.126992 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:25:50.127032 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:25:50.127044 1465496 node_conditions.go:105] duration metric: took 177.723252ms to run NodePressure ...
	I0131 03:25:50.127056 1465496 start.go:228] waiting for startup goroutines ...
	I0131 03:25:50.127063 1465496 start.go:233] waiting for cluster config update ...
	I0131 03:25:50.127072 1465496 start.go:242] writing updated cluster config ...
	I0131 03:25:50.127343 1465496 ssh_runner.go:195] Run: rm -f paused
	I0131 03:25:50.184224 1465496 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0131 03:25:50.186267 1465496 out.go:177] * Done! kubectl is now configured to use "no-preload-625812" cluster and "default" namespace by default
	I0131 03:25:48.481166 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.977129 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:52.977622 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.586089 1465727 system_pods.go:86] 6 kube-system pods found
	I0131 03:25:50.586129 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:50.586138 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Pending
	I0131 03:25:50.586144 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:50.586151 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Pending
	I0131 03:25:50.586172 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:50.586182 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:50.586211 1465727 retry.go:31] will retry after 13.55623924s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:51.856116 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:53.856661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:55.480823 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:57.978681 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:56.355895 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:58.356767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:59.981147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.479364 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:00.856081 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.977218 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:06.978863 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.148474 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:04.148505 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:04.148511 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Pending
	I0131 03:26:04.148516 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:04.148520 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:04.148524 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:04.148528 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:04.148533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:04.148537 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:04.148555 1465727 retry.go:31] will retry after 14.271857783s: missing components: etcd
	I0131 03:26:05.355042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:07.358366 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:08.981159 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:10.982761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:09.856454 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:12.357096 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:13.478470 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:15.977827 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.426593 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:18.426625 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:18.426634 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Running
	I0131 03:26:18.426641 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:18.426647 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:18.426652 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:18.426657 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:18.426667 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:18.426676 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:18.426690 1465727 system_pods.go:126] duration metric: took 1m9.974130417s to wait for k8s-apps to be running ...
	I0131 03:26:18.426704 1465727 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:26:18.426762 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:26:18.443853 1465727 system_svc.go:56] duration metric: took 17.14056ms WaitForService to wait for kubelet.
	I0131 03:26:18.443902 1465727 kubeadm.go:581] duration metric: took 1m16.810021481s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:26:18.443930 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:26:18.447269 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:26:18.447298 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:26:18.447311 1465727 node_conditions.go:105] duration metric: took 3.375419ms to run NodePressure ...
	I0131 03:26:18.447325 1465727 start.go:228] waiting for startup goroutines ...
	I0131 03:26:18.447333 1465727 start.go:233] waiting for cluster config update ...
	I0131 03:26:18.447348 1465727 start.go:242] writing updated cluster config ...
	I0131 03:26:18.447643 1465727 ssh_runner.go:195] Run: rm -f paused
	I0131 03:26:18.500327 1465727 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0131 03:26:18.502092 1465727 out.go:177] 
	W0131 03:26:18.503693 1465727 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0131 03:26:18.505132 1465727 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0131 03:26:18.506889 1465727 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-711547" cluster and "default" namespace by default
	I0131 03:26:14.856448 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:17.357112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.478401 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:20.977208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.978473 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:19.857118 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.358299 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:25.478227 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:27.978500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:24.855341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:26.855774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:28.856168 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:30.477275 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:32.478896 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:31.357512 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:33.363164 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:34.978058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:37.481411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:35.856084 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:38.358589 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:39.976914 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:41.979388 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:40.856122 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:42.856950 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:44.477345 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:46.478466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:45.356312 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:47.855178 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:48.978543 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.477641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:49.856079 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.856377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:54.358161 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:53.477989 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:55.977887 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:56.855581 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.856493 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.477589 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:00.478116 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:02.978262 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:01.354961 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:03.355994 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.478139 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.977913 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.356248 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.855596 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:10.479147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:12.977533 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:09.856222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:11.857068 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.356693 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.978967 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:17.477119 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:16.854825 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:18.855620 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:19.477877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:21.482081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:20.856333 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.355603 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.978877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:26.477700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:25.356085 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:27.356888 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:28.478497 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:30.977469 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:32.977663 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:29.854905 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:31.855752 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:33.855976 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.480505 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.977880 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.857042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.862112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:39.977961 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.478948 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:40.355787 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.358217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.977950 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.478570 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.855551 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.355853 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.977939 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:51.978267 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.855671 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:52.357889 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:53.979331 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:56.477411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:54.856642 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:57.357372 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:58.478175 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:00.977929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.978272 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:59.856232 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.356390 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:05.477602 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:07.478168 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:04.855423 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:06.859565 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.355517 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.977639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.977754 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.855199 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:13.856260 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:14.477406 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:16.478372 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:15.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:17.861124 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:18.980067 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:21.478833 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:20.356883 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:22.358007 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:23.979040 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.478463 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:24.855207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.855709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.866306 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.978973 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.477340 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.355706 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.855699 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.477521 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:35.478390 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:37.977270 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:36.358244 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:38.855704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:39.979930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.477381 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:40.856442 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.857041 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:44.477500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:46.478446 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:45.356039 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:47.855042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:48.977241 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:50.977925 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:52.978323 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:49.857897 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:51.857941 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:54.357042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.477690 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:57.477927 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.855298 1465898 pod_ready.go:81] duration metric: took 4m0.007008152s waiting for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	E0131 03:28:55.855323 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:28:55.855330 1465898 pod_ready.go:38] duration metric: took 4m2.377385486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:28:55.855346 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:28:55.855399 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:55.855533 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:55.913399 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:55.913425 1465898 cri.go:89] found id: ""
	I0131 03:28:55.913445 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:55.913515 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.918308 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:55.918379 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:55.964846 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:55.964872 1465898 cri.go:89] found id: ""
	I0131 03:28:55.964881 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:55.964942 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.969090 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:55.969158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:56.012247 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:56.012271 1465898 cri.go:89] found id: ""
	I0131 03:28:56.012279 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:56.012337 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.016457 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:56.016535 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:56.053842 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.053867 1465898 cri.go:89] found id: ""
	I0131 03:28:56.053877 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:56.053926 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.057807 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:56.057889 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:28:56.097431 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.097465 1465898 cri.go:89] found id: ""
	I0131 03:28:56.097477 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:28:56.097549 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.101354 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:28:56.101420 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:28:56.136696 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.136725 1465898 cri.go:89] found id: ""
	I0131 03:28:56.136735 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:28:56.136800 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.140584 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:28:56.140661 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:28:56.177606 1465898 cri.go:89] found id: ""
	I0131 03:28:56.177639 1465898 logs.go:284] 0 containers: []
	W0131 03:28:56.177650 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:28:56.177658 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:28:56.177779 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:28:56.215795 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.215824 1465898 cri.go:89] found id: ""
	I0131 03:28:56.215835 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:28:56.215909 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.220297 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:28:56.220324 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:28:56.319500 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:28:56.319544 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.355731 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:28:56.355767 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.410301 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:28:56.410341 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:28:56.858474 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:28:56.858531 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:28:56.903299 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:28:56.903337 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.961020 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:28:56.961070 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.998347 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:28:56.998382 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:28:57.011562 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:28:57.011594 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:28:57.152899 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:28:57.152937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:57.201041 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:28:57.201084 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:57.247253 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:28:57.247289 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
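The gathering cycle above enumerates each control-plane component with "sudo crictl ps -a --quiet --name=<component>" and then tails the last 400 lines of every container it finds with "crictl logs --tail 400 <id>". A minimal Go sketch of that sweep follows, shelling out to the same crictl commands recorded in this log; the component list mirrors the queries above, but the helper structure is illustrative rather than minikube's actual gather code, and it assumes crictl and sudo are available on the node.

// gatherlogs.go: illustrative sketch of the crictl-based log sweep recorded above.
// Assumes crictl and sudo are present on the node; structure is not minikube's own code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		// "sudo crictl ps -a --quiet --name=<component>" prints one container ID per line.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// "sudo crictl logs --tail 400 <id>" mirrors the per-container tail in the log above.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("logs for %s [%s] failed: %v\n", name, id, err)
				continue
			}
			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
		}
	}
}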
	I0131 03:28:59.478758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:01.977644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:59.786669 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:28:59.804046 1465898 api_server.go:72] duration metric: took 4m8.808083047s to wait for apiserver process to appear ...
	I0131 03:28:59.804079 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:28:59.804131 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:59.804249 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:59.846418 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:59.846440 1465898 cri.go:89] found id: ""
	I0131 03:28:59.846448 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:59.846516 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.850526 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:59.850588 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:59.892343 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:59.892373 1465898 cri.go:89] found id: ""
	I0131 03:28:59.892382 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:59.892449 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.896483 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:59.896561 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:59.933901 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.933934 1465898 cri.go:89] found id: ""
	I0131 03:28:59.933945 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:59.934012 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.938150 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:59.938232 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:59.980328 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:59.980354 1465898 cri.go:89] found id: ""
	I0131 03:28:59.980363 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:59.980418 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.984866 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:59.984943 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:00.029663 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.029695 1465898 cri.go:89] found id: ""
	I0131 03:29:00.029705 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:00.029753 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.034759 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:00.034827 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:00.084320 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.084347 1465898 cri.go:89] found id: ""
	I0131 03:29:00.084355 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:00.084431 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.088744 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:00.088819 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:00.133028 1465898 cri.go:89] found id: ""
	I0131 03:29:00.133062 1465898 logs.go:284] 0 containers: []
	W0131 03:29:00.133072 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:00.133080 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:00.133145 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:00.175187 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.175219 1465898 cri.go:89] found id: ""
	I0131 03:29:00.175229 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:00.175306 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.179387 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:00.179420 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.233630 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:00.233676 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.271692 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:00.271735 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:00.655131 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:00.655177 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:00.757571 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:00.757628 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:00.805958 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:00.806000 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:00.842604 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:00.842650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:00.888064 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:00.888103 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.939276 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:00.939331 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:00.981965 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:00.982006 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:00.996237 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:00.996265 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:01.129715 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:01.129754 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.677131 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:29:03.684945 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:29:03.687117 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:03.687142 1465898 api_server.go:131] duration metric: took 3.883056117s to wait for apiserver health ...
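The lines above record the healthz wait: the client polls https://192.168.61.123:8444/healthz until it answers 200 with body "ok", then reads the control-plane version. Below is a minimal sketch of such a probe loop; it skips TLS verification purely for illustration, whereas the real check authenticates with the cluster CA and client certificates from the kubeconfig.

// healthzprobe.go: illustrative apiserver health probe matching the wait logged above.
// InsecureSkipVerify is for demonstration only; the real client trusts the cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.123:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// Matches the "returned 200: ok" lines in the log above.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(time.Second) // retry until healthy or the deadline passes
	}
	fmt.Println("apiserver did not report healthy before the deadline")
}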
	I0131 03:29:03.687171 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:03.687245 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:03.687303 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:03.727289 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:03.727314 1465898 cri.go:89] found id: ""
	I0131 03:29:03.727322 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:29:03.727375 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.731095 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:03.731158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:03.779103 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.779134 1465898 cri.go:89] found id: ""
	I0131 03:29:03.779144 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:29:03.779223 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.783387 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:03.783459 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:03.821342 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:03.821368 1465898 cri.go:89] found id: ""
	I0131 03:29:03.821376 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:29:03.821438 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.825907 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:03.825990 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:03.863826 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:03.863853 1465898 cri.go:89] found id: ""
	I0131 03:29:03.863867 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:29:03.863919 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.868093 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:03.868163 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:03.908653 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:03.908681 1465898 cri.go:89] found id: ""
	I0131 03:29:03.908690 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:03.908750 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.912998 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:03.913078 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:03.961104 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:03.961131 1465898 cri.go:89] found id: ""
	I0131 03:29:03.961139 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:03.961212 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.965913 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:03.965996 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:04.003791 1465898 cri.go:89] found id: ""
	I0131 03:29:04.003824 1465898 logs.go:284] 0 containers: []
	W0131 03:29:04.003833 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:04.003840 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:04.003907 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:04.040736 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.040773 1465898 cri.go:89] found id: ""
	I0131 03:29:04.040785 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:04.040852 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:04.045013 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:04.045042 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:04.091615 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:04.091650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:04.204602 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:04.204638 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:04.257510 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:04.257548 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:04.296585 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:04.296619 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:04.360438 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:04.360480 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.398825 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:04.398858 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:04.711357 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:04.711403 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:04.804895 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:04.804940 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:04.819394 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:04.819426 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:04.869897 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:04.869937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:04.918002 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:04.918040 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:07.471428 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:07.471466 1465898 system_pods.go:61] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.471474 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.471481 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.471488 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.471495 1465898 system_pods.go:61] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.471501 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.471516 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.471524 1465898 system_pods.go:61] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.471535 1465898 system_pods.go:74] duration metric: took 3.784356035s to wait for pod list to return data ...
	I0131 03:29:07.471552 1465898 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:07.474519 1465898 default_sa.go:45] found service account: "default"
	I0131 03:29:07.474547 1465898 default_sa.go:55] duration metric: took 2.986529ms for default service account to be created ...
	I0131 03:29:07.474559 1465898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:07.480778 1465898 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:07.480805 1465898 system_pods.go:89] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.480810 1465898 system_pods.go:89] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.480816 1465898 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.480823 1465898 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.480827 1465898 system_pods.go:89] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.480831 1465898 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.480837 1465898 system_pods.go:89] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.480842 1465898 system_pods.go:89] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.480850 1465898 system_pods.go:126] duration metric: took 6.285456ms to wait for k8s-apps to be running ...
	I0131 03:29:07.480856 1465898 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:07.480905 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:07.497612 1465898 system_svc.go:56] duration metric: took 16.74594ms WaitForService to wait for kubelet.
	I0131 03:29:07.497643 1465898 kubeadm.go:581] duration metric: took 4m16.501686281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:07.497678 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:07.501680 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:07.501732 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:07.501748 1465898 node_conditions.go:105] duration metric: took 4.063716ms to run NodePressure ...
	I0131 03:29:07.501763 1465898 start.go:228] waiting for startup goroutines ...
	I0131 03:29:07.501772 1465898 start.go:233] waiting for cluster config update ...
	I0131 03:29:07.501818 1465898 start.go:242] writing updated cluster config ...
	I0131 03:29:07.502234 1465898 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:07.559193 1465898 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:07.561350 1465898 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-873005" cluster and "default" namespace by default
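The start summary above notes kubectl 1.29.1 against a 1.28.4 cluster, a minor-version skew of 1, which kubectl's skew policy permits (one minor version in either direction). A small illustrative sketch of how that skew figure can be derived from the two version strings; this is not minikube's actual implementation.

// skew.go: illustrative derivation of the "(minor skew: 1)" figure shown above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings; input is assumed well-formed.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		m, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return m
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.29.1", "1.28.4")) // prints 1
}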
	I0131 03:29:03.978465 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:06.477545 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:08.480466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:10.978639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:13.478152 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978967 1466459 pod_ready.go:81] duration metric: took 4m0.008624682s waiting for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	E0131 03:29:15.978976 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:29:15.978984 1466459 pod_ready.go:38] duration metric: took 4m1.99139457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
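The block above records a wait that expires rather than succeeds: the Ready condition of pod metrics-server-57f55c9bc5-dj7l2 is polled at roughly 2.5-second intervals until the 4m0s budget is exhausted and the wait returns "context deadline exceeded", after which startup continues with the pod still Pending. A generic sketch of that poll-until-deadline pattern follows; isPodReady is an illustrative stub standing in for the real condition check, and the names here are not minikube's.

// waitready.go: generic sketch of the poll-until-deadline pattern behind the
// pod_ready wait above; isPodReady is an illustrative stub, not the real check.
package main

import (
	"context"
	"fmt"
	"time"
)

// waitFor polls check every interval until it reports true or ctx expires.
func waitFor(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			// Surfaces as "waitPodCondition: context deadline exceeded", as in the log above.
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func isPodReady() (bool, error) {
	// Stub: a real implementation would fetch the pod and return true once its
	// Ready condition is True (the pod above stays Pending, so this never fires).
	return false, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitFor(ctx, 2500*time.Millisecond, isPodReady); err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	fmt.Println("pod is Ready")
}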
	I0131 03:29:15.978999 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:29:15.979026 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:15.979074 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:16.041735 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:16.041774 1466459 cri.go:89] found id: ""
	I0131 03:29:16.041784 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:16.041845 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.046910 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:16.046982 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:16.085124 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.085156 1466459 cri.go:89] found id: ""
	I0131 03:29:16.085166 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:16.085226 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.089189 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:16.089274 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:16.129255 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.129286 1466459 cri.go:89] found id: ""
	I0131 03:29:16.129296 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:16.129352 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.133364 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:16.133451 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:16.170605 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.170634 1466459 cri.go:89] found id: ""
	I0131 03:29:16.170643 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:16.170704 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.175117 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:16.175197 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:16.210139 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:16.210169 1466459 cri.go:89] found id: ""
	I0131 03:29:16.210179 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:16.210248 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.214877 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:16.214960 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:16.257772 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.257797 1466459 cri.go:89] found id: ""
	I0131 03:29:16.257807 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:16.257878 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.262276 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:16.262341 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:16.304203 1466459 cri.go:89] found id: ""
	I0131 03:29:16.304233 1466459 logs.go:284] 0 containers: []
	W0131 03:29:16.304241 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:16.304248 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:16.304325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:16.343337 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:16.343360 1466459 cri.go:89] found id: ""
	I0131 03:29:16.343368 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:16.343423 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.347098 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:16.347129 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.389501 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:16.389544 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.426153 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:16.426196 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.476241 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:16.476281 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.533086 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:16.533131 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:16.575664 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:16.575701 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:16.675622 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:16.675669 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:16.690251 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:16.690285 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:16.828714 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:16.828748 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:17.253277 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:17.253335 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:17.304285 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:17.304323 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:17.340432 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:17.340465 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:19.889056 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:29:19.904225 1466459 api_server.go:72] duration metric: took 4m8.286630357s to wait for apiserver process to appear ...
	I0131 03:29:19.904258 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:29:19.904302 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:19.904375 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:19.939116 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:19.939147 1466459 cri.go:89] found id: ""
	I0131 03:29:19.939159 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:19.939225 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.943273 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:19.943351 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:19.979411 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:19.979436 1466459 cri.go:89] found id: ""
	I0131 03:29:19.979445 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:19.979512 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.984054 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:19.984148 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:20.022949 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.022978 1466459 cri.go:89] found id: ""
	I0131 03:29:20.022988 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:20.023046 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.027252 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:20.027325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:20.064215 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.064238 1466459 cri.go:89] found id: ""
	I0131 03:29:20.064246 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:20.064303 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.068589 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:20.068687 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:20.106750 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.106781 1466459 cri.go:89] found id: ""
	I0131 03:29:20.106792 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:20.106854 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.111267 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:20.111342 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:20.147750 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.147789 1466459 cri.go:89] found id: ""
	I0131 03:29:20.147801 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:20.147873 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.152882 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:20.152950 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:20.191082 1466459 cri.go:89] found id: ""
	I0131 03:29:20.191121 1466459 logs.go:284] 0 containers: []
	W0131 03:29:20.191133 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:20.191143 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:20.191226 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:20.226346 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.226373 1466459 cri.go:89] found id: ""
	I0131 03:29:20.226382 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:20.226436 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.230561 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:20.230607 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:20.596919 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:20.596968 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:20.691142 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:20.691184 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:20.750659 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:20.750692 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.816839 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:20.816882 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.852691 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:20.852730 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.909788 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:20.909828 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.950311 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:20.950360 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.985515 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:20.985554 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:21.030306 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:21.030350 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:21.043130 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:21.043172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:21.160716 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:21.160763 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.706550 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:29:23.711528 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:29:23.713998 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:23.714027 1466459 api_server.go:131] duration metric: took 3.809760557s to wait for apiserver health ...
	I0131 03:29:23.714039 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:23.714070 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:23.714142 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:23.754990 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:23.755017 1466459 cri.go:89] found id: ""
	I0131 03:29:23.755028 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:23.755091 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.759151 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:23.759224 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:23.798410 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.798448 1466459 cri.go:89] found id: ""
	I0131 03:29:23.798459 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:23.798541 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.802512 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:23.802588 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:23.840962 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:23.840991 1466459 cri.go:89] found id: ""
	I0131 03:29:23.841001 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:23.841073 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.844943 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:23.845021 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:23.882314 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:23.882355 1466459 cri.go:89] found id: ""
	I0131 03:29:23.882368 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:23.882438 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.886227 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:23.886292 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:23.925001 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:23.925031 1466459 cri.go:89] found id: ""
	I0131 03:29:23.925042 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:23.925100 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.929531 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:23.929601 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:23.969068 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:23.969098 1466459 cri.go:89] found id: ""
	I0131 03:29:23.969108 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:23.969167 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.973154 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:23.973216 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:24.010928 1466459 cri.go:89] found id: ""
	I0131 03:29:24.010956 1466459 logs.go:284] 0 containers: []
	W0131 03:29:24.010963 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:24.010970 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:24.011026 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:24.052588 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.052614 1466459 cri.go:89] found id: ""
	I0131 03:29:24.052622 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:24.052678 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:24.056735 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:24.056762 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:24.105290 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:24.105324 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:24.152634 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:24.152678 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:24.198981 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:24.199021 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:24.247140 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:24.247172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:24.287472 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:24.287502 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:24.344060 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:24.344101 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.384811 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:24.384846 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:24.707577 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:24.707628 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:24.756450 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:24.756490 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:24.844886 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:24.844935 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:24.859102 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:24.859132 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:27.482952 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:27.482992 1466459 system_pods.go:61] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.483000 1466459 system_pods.go:61] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.483007 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.483027 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.483038 1466459 system_pods.go:61] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.483049 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.483056 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.483066 1466459 system_pods.go:61] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.483076 1466459 system_pods.go:74] duration metric: took 3.76903179s to wait for pod list to return data ...
	I0131 03:29:27.483087 1466459 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:27.486092 1466459 default_sa.go:45] found service account: "default"
	I0131 03:29:27.486121 1466459 default_sa.go:55] duration metric: took 3.025473ms for default service account to be created ...
	I0131 03:29:27.486131 1466459 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:27.491964 1466459 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:27.491989 1466459 system_pods.go:89] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.491997 1466459 system_pods.go:89] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.492004 1466459 system_pods.go:89] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.492010 1466459 system_pods.go:89] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.492015 1466459 system_pods.go:89] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.492022 1466459 system_pods.go:89] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.492032 1466459 system_pods.go:89] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.492044 1466459 system_pods.go:89] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.492059 1466459 system_pods.go:126] duration metric: took 5.920402ms to wait for k8s-apps to be running ...
	I0131 03:29:27.492076 1466459 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:27.492131 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:27.507857 1466459 system_svc.go:56] duration metric: took 15.770556ms WaitForService to wait for kubelet.
	I0131 03:29:27.507891 1466459 kubeadm.go:581] duration metric: took 4m15.890307101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:27.507918 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:27.510942 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:27.510968 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:27.510980 1466459 node_conditions.go:105] duration metric: took 3.056564ms to run NodePressure ...
	I0131 03:29:27.510992 1466459 start.go:228] waiting for startup goroutines ...
	I0131 03:29:27.510998 1466459 start.go:233] waiting for cluster config update ...
	I0131 03:29:27.511008 1466459 start.go:242] writing updated cluster config ...
	I0131 03:29:27.511334 1466459 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:27.564506 1466459 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:27.566730 1466459 out.go:177] * Done! kubectl is now configured to use "embed-certs-958254" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:20:05 UTC, ends at Wed 2024-01-31 03:34:52 UTC. --
	Jan 31 03:34:51 no-preload-625812 crio[723]: time="2024-01-31 03:34:51.998159727Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672091998147988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a922610a-ebd9-41da-ae68-379cb2aec715 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:34:51 no-preload-625812 crio[723]: time="2024-01-31 03:34:51.998731237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a8b1e985-b5fe-45a4-9f7b-86bebcdf433f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:51 no-preload-625812 crio[723]: time="2024-01-31 03:34:51.998794061Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a8b1e985-b5fe-45a4-9f7b-86bebcdf433f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:51 no-preload-625812 crio[723]: time="2024-01-31 03:34:51.998980208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916,PodSandboxId:c6f7afec463a0cb9b0d5613dab03cf5116afdf47410b042a7baa3ddf8aa5d23c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706671547631430152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkvj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83805bb8-284a-4f67-b53a-c19bf5d51b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2e9bd9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d,PodSandboxId:484a94885270dc3e1cbb1c2f2d6e4d1365bd8c3429b4fb0d1c279fb2c9dc88e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706671547607013454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb6c1a2-9c1e-442c-abb3-6e993cb70875,},Annotations:map[string]string{io.kubernetes.container.hash: 66c9214c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4,PodSandboxId:85199cebf804647aa6c3dff02648dfcc3303e91c73ae6cff42cb744567568c3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706671546824870791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hvxjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16747666-47f2-4cf0-85d0-0cffecb9c7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 74254cf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55,PodSandboxId:4b8f0fe58c28ec4161dd6663f89c963d58c7c33d18a7d2970d4f8303877d160e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706671524835460137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d4944aae9f235fb622314a14d620e5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 821c95d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b,PodSandboxId:2e920b86b8123ef8bbf2fa2fbb40273bfd8a43c971ec4d9a221da0f05021c1aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706671524633340227,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b03d711ccb681cf0411001a27ad2efa,},Annotations:map
[string]string{io.kubernetes.container.hash: af741e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2,PodSandboxId:72db6dc25f93d650f92199c6f48a2501ccab07bd577e1ec89f99136d65b2966e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706671524171584635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90272abfeb358ef11870fd0e00f0291b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770,PodSandboxId:6b364a443707c3e19e2543f645e2a97b327ad0c277dcfa09e0ad8022fea22dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706671524208434902,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9b8c032ab8631a35d6e23d51a4c137,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a8b1e985-b5fe-45a4-9f7b-86bebcdf433f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.037385403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a04afe0d-68d6-4883-8247-ec6742b361a1 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.037442576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a04afe0d-68d6-4883-8247-ec6742b361a1 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.038957494Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=60e1acf0-654b-46f4-bb3a-030d7153ab76 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.039295527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672092039282353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=60e1acf0-654b-46f4-bb3a-030d7153ab76 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.040222637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1288eac4-8228-4b16-a7a7-358505b700b5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.040309404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1288eac4-8228-4b16-a7a7-358505b700b5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.040930225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916,PodSandboxId:c6f7afec463a0cb9b0d5613dab03cf5116afdf47410b042a7baa3ddf8aa5d23c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706671547631430152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkvj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83805bb8-284a-4f67-b53a-c19bf5d51b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2e9bd9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d,PodSandboxId:484a94885270dc3e1cbb1c2f2d6e4d1365bd8c3429b4fb0d1c279fb2c9dc88e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706671547607013454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb6c1a2-9c1e-442c-abb3-6e993cb70875,},Annotations:map[string]string{io.kubernetes.container.hash: 66c9214c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4,PodSandboxId:85199cebf804647aa6c3dff02648dfcc3303e91c73ae6cff42cb744567568c3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706671546824870791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hvxjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16747666-47f2-4cf0-85d0-0cffecb9c7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 74254cf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55,PodSandboxId:4b8f0fe58c28ec4161dd6663f89c963d58c7c33d18a7d2970d4f8303877d160e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706671524835460137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d4944aae9f235fb622314a14d620e5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 821c95d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b,PodSandboxId:2e920b86b8123ef8bbf2fa2fbb40273bfd8a43c971ec4d9a221da0f05021c1aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706671524633340227,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b03d711ccb681cf0411001a27ad2efa,},Annotations:map
[string]string{io.kubernetes.container.hash: af741e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2,PodSandboxId:72db6dc25f93d650f92199c6f48a2501ccab07bd577e1ec89f99136d65b2966e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706671524171584635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90272abfeb358ef11870fd0e00f0291b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770,PodSandboxId:6b364a443707c3e19e2543f645e2a97b327ad0c277dcfa09e0ad8022fea22dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706671524208434902,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9b8c032ab8631a35d6e23d51a4c137,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1288eac4-8228-4b16-a7a7-358505b700b5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.086937694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cc422811-ba96-410a-87e6-1251659943cb name=/runtime.v1.RuntimeService/Version
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.087012919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cc422811-ba96-410a-87e6-1251659943cb name=/runtime.v1.RuntimeService/Version
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.088803288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2c25034e-6861-4e87-9e29-20639b3ce55b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.089161439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672092089147841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=2c25034e-6861-4e87-9e29-20639b3ce55b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.089960477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=45214ef4-59d2-4ee3-874b-ba9642fa4372 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.090025138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=45214ef4-59d2-4ee3-874b-ba9642fa4372 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.090194388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916,PodSandboxId:c6f7afec463a0cb9b0d5613dab03cf5116afdf47410b042a7baa3ddf8aa5d23c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706671547631430152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkvj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83805bb8-284a-4f67-b53a-c19bf5d51b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2e9bd9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d,PodSandboxId:484a94885270dc3e1cbb1c2f2d6e4d1365bd8c3429b4fb0d1c279fb2c9dc88e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706671547607013454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb6c1a2-9c1e-442c-abb3-6e993cb70875,},Annotations:map[string]string{io.kubernetes.container.hash: 66c9214c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4,PodSandboxId:85199cebf804647aa6c3dff02648dfcc3303e91c73ae6cff42cb744567568c3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706671546824870791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hvxjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16747666-47f2-4cf0-85d0-0cffecb9c7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 74254cf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55,PodSandboxId:4b8f0fe58c28ec4161dd6663f89c963d58c7c33d18a7d2970d4f8303877d160e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706671524835460137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d4944aae9f235fb622314a14d620e5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 821c95d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b,PodSandboxId:2e920b86b8123ef8bbf2fa2fbb40273bfd8a43c971ec4d9a221da0f05021c1aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706671524633340227,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b03d711ccb681cf0411001a27ad2efa,},Annotations:map
[string]string{io.kubernetes.container.hash: af741e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2,PodSandboxId:72db6dc25f93d650f92199c6f48a2501ccab07bd577e1ec89f99136d65b2966e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706671524171584635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90272abfeb358ef11870fd0e00f0291b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770,PodSandboxId:6b364a443707c3e19e2543f645e2a97b327ad0c277dcfa09e0ad8022fea22dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706671524208434902,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9b8c032ab8631a35d6e23d51a4c137,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=45214ef4-59d2-4ee3-874b-ba9642fa4372 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.126816900Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1f571e27-3b3b-4d6e-8a66-a03f751195fe name=/runtime.v1.RuntimeService/Version
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.126933342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1f571e27-3b3b-4d6e-8a66-a03f751195fe name=/runtime.v1.RuntimeService/Version
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.129501407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=921f0737-a22d-444b-b418-042dc9af5ca5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.129957574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672092129941787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=921f0737-a22d-444b-b418-042dc9af5ca5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.130723913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cbb81723-8305-4330-b718-92069f631f04 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.130795925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cbb81723-8305-4330-b718-92069f631f04 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:34:52 no-preload-625812 crio[723]: time="2024-01-31 03:34:52.130950062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916,PodSandboxId:c6f7afec463a0cb9b0d5613dab03cf5116afdf47410b042a7baa3ddf8aa5d23c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706671547631430152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkvj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83805bb8-284a-4f67-b53a-c19bf5d51b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2e9bd9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d,PodSandboxId:484a94885270dc3e1cbb1c2f2d6e4d1365bd8c3429b4fb0d1c279fb2c9dc88e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706671547607013454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb6c1a2-9c1e-442c-abb3-6e993cb70875,},Annotations:map[string]string{io.kubernetes.container.hash: 66c9214c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4,PodSandboxId:85199cebf804647aa6c3dff02648dfcc3303e91c73ae6cff42cb744567568c3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706671546824870791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hvxjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16747666-47f2-4cf0-85d0-0cffecb9c7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 74254cf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55,PodSandboxId:4b8f0fe58c28ec4161dd6663f89c963d58c7c33d18a7d2970d4f8303877d160e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706671524835460137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d4944aae9f235fb622314a14d620e5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 821c95d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b,PodSandboxId:2e920b86b8123ef8bbf2fa2fbb40273bfd8a43c971ec4d9a221da0f05021c1aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706671524633340227,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b03d711ccb681cf0411001a27ad2efa,},Annotations:map
[string]string{io.kubernetes.container.hash: af741e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2,PodSandboxId:72db6dc25f93d650f92199c6f48a2501ccab07bd577e1ec89f99136d65b2966e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706671524171584635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90272abfeb358ef11870fd0e00f0291b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770,PodSandboxId:6b364a443707c3e19e2543f645e2a97b327ad0c277dcfa09e0ad8022fea22dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706671524208434902,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9b8c032ab8631a35d6e23d51a4c137,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cbb81723-8305-4330-b718-92069f631f04 name=/runtime.v1.RuntimeService/ListContainers
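	
	The CRI-O journal excerpt above is the kubelet polling the runtime over its gRPC socket: Version, ImageFsInfo and ListContainers requests repeating every few tens of milliseconds, each answered with the full container list because no filter is set. A minimal sketch of issuing the same ListContainers call, assuming the k8s.io/cri-api runtime/v1 package and the crio socket path shown in the node annotations:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the CRI-O socket the kubelet is polling in the journal above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Same call as /runtime.v1.RuntimeService/ListContainers with an empty
		// filter, so the runtime returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  attempt=%d  %s\n",
				c.Id[:12], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}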
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ccb4de319e9dc       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   c6f7afec463a0       kube-proxy-pkvj6
	4433aa1e7b647       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   484a94885270d       storage-provisioner
	7f1e547f6a32e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   85199cebf8046       coredns-76f75df574-hvxjf
	906c3b43d364f       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   4b8f0fe58c28e       etcd-no-preload-625812
	5d6fe45d31ec2       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   2e920b86b8123       kube-apiserver-no-preload-625812
	31fb1f9e7e60b       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   6b364a443707c       kube-controller-manager-no-preload-625812
	6f838a7ac635d       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   72db6dc25f93d       kube-scheduler-no-preload-625812
	
	
	==> coredns [7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               no-preload-625812
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-625812
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=no-preload-625812
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-625812
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 03:34:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:30:59 +0000   Wed, 31 Jan 2024 03:25:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:30:59 +0000   Wed, 31 Jan 2024 03:25:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:30:59 +0000   Wed, 31 Jan 2024 03:25:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:30:59 +0000   Wed, 31 Jan 2024 03:25:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.23
	  Hostname:    no-preload-625812
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a3e353dccbd4b1ab490fca2c6c6d8ff
	  System UUID:                2a3e353d-ccbd-4b1a-b490-fca2c6c6d8ff
	  Boot ID:                    398cccd6-75db-4294-9247-8c15b6816d91
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-hvxjf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-625812                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m23s
	  kube-system                 kube-apiserver-no-preload-625812             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-625812    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-pkvj6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-625812             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-57f55c9bc5-vjnfp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m29s (x8 over 9m29s)  kubelet          Node no-preload-625812 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m29s (x8 over 9m29s)  kubelet          Node no-preload-625812 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m29s (x7 over 9m29s)  kubelet          Node no-preload-625812 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node no-preload-625812 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node no-preload-625812 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node no-preload-625812 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s                  kubelet          Node no-preload-625812 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m10s                  kubelet          Node no-preload-625812 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node no-preload-625812 event: Registered Node no-preload-625812 in Controller
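	
	The Non-terminated Pods and Allocated resources tables above express pod requests and limits as percentages of the node's allocatable resources (850m of 2 CPUs is 42%, 370Mi of 2165900Ki is about 17%). A small self-contained Go sketch of that arithmetic, with the values copied from the tables; integer division reproduces the rounded-down figures kubectl prints:
	
	package main
	
	import "fmt"
	
	func main() {
		// Node allocatable resources from the "Allocatable" block above.
		cpuCapacityMilli := int64(2) * 1000 // 2 CPUs
		memCapacityKi := int64(2165900)     // 2165900Ki
	
		// Summed pod requests/limits from the "Allocated resources" block.
		cpuRequestsMilli := int64(850)     // 850m
		memRequestsKi := int64(370) * 1024 // 370Mi
		memLimitsKi := int64(170) * 1024   // 170Mi
	
		pct := func(used, capacity int64) int64 { return used * 100 / capacity }
	
		fmt.Printf("cpu requests:    850m (%d%%)\n", pct(cpuRequestsMilli, cpuCapacityMilli))
		fmt.Printf("memory requests: 370Mi (%d%%)\n", pct(memRequestsKi, memCapacityKi))
		fmt.Printf("memory limits:   170Mi (%d%%)\n", pct(memLimitsKi, memCapacityKi))
	}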
	
	
	==> dmesg <==
	[Jan31 03:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073692] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan31 03:20] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.935025] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.126568] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.622058] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.615884] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.121408] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.161691] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.120255] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.225197] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[ +29.075869] systemd-fstab-generator[1335]: Ignoring "noauto" for root device
	[Jan31 03:21] kauditd_printk_skb: 29 callbacks suppressed
	[Jan31 03:25] systemd-fstab-generator[3903]: Ignoring "noauto" for root device
	[  +9.802284] systemd-fstab-generator[4235]: Ignoring "noauto" for root device
	[ +13.455711] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55] <==
	{"level":"info","ts":"2024-01-31T03:25:26.696695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 switched to configuration voters=(7499858730705815237)"}
	{"level":"info","ts":"2024-01-31T03:25:26.6971Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"64e1bcbd7b58f1a0","local-member-id":"6814d9c7955506c5","added-peer-id":"6814d9c7955506c5","added-peer-peer-urls":["https://192.168.72.23:2380"]}
	{"level":"info","ts":"2024-01-31T03:25:26.735532Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-31T03:25:26.735988Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6814d9c7955506c5","initial-advertise-peer-urls":["https://192.168.72.23:2380"],"listen-peer-urls":["https://192.168.72.23:2380"],"advertise-client-urls":["https://192.168.72.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-31T03:25:26.735815Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.23:2380"}
	{"level":"info","ts":"2024-01-31T03:25:26.742239Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-31T03:25:26.742517Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.23:2380"}
	{"level":"info","ts":"2024-01-31T03:25:27.337891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-31T03:25:27.337964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-31T03:25:27.338016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 received MsgPreVoteResp from 6814d9c7955506c5 at term 1"}
	{"level":"info","ts":"2024-01-31T03:25:27.338034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 became candidate at term 2"}
	{"level":"info","ts":"2024-01-31T03:25:27.338043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 received MsgVoteResp from 6814d9c7955506c5 at term 2"}
	{"level":"info","ts":"2024-01-31T03:25:27.338064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 became leader at term 2"}
	{"level":"info","ts":"2024-01-31T03:25:27.338074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6814d9c7955506c5 elected leader 6814d9c7955506c5 at term 2"}
	{"level":"info","ts":"2024-01-31T03:25:27.339445Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:25:27.340797Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6814d9c7955506c5","local-member-attributes":"{Name:no-preload-625812 ClientURLs:[https://192.168.72.23:2379]}","request-path":"/0/members/6814d9c7955506c5/attributes","cluster-id":"64e1bcbd7b58f1a0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:25:27.340867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:25:27.341484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:25:27.341683Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"64e1bcbd7b58f1a0","local-member-id":"6814d9c7955506c5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:25:27.341797Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:25:27.341858Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:25:27.342941Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:25:27.343002Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T03:25:27.344008Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:25:27.344661Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.23:2379"}
	
	
	==> kernel <==
	 03:34:52 up 14 min,  0 users,  load average: 0.19, 0.27, 0.20
	Linux no-preload-625812 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b] <==
	I0131 03:28:47.404946       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:30:28.745735       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:30:28.746208       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0131 03:30:29.747232       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:30:29.747285       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:30:29.747294       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:30:29.747338       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:30:29.747386       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:30:29.748644       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:31:29.747837       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:31:29.747986       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:31:29.748001       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:31:29.749289       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:31:29.749449       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:31:29.749530       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:33:29.748441       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:33:29.748722       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:33:29.748808       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:33:29.749745       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:33:29.749904       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:33:29.749948       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770] <==
	I0131 03:29:15.369492       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="153.906µs"
	E0131 03:29:43.967926       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:29:44.448558       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:30:13.973947       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:30:14.457863       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:30:43.979885       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:30:44.465171       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:31:13.985056       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:31:14.475278       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:31:38.371305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="346.627µs"
	E0131 03:31:43.991076       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:31:44.483490       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:31:52.370975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="135.499µs"
	E0131 03:32:13.997124       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:32:14.494368       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:32:44.002902       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:32:44.503045       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:33:14.008575       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:33:14.511796       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:33:44.015430       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:33:44.528925       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:34:14.020932       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:34:14.537062       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:34:44.025539       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:34:44.545859       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916] <==
	I0131 03:25:47.925582       1 server_others.go:72] "Using iptables proxy"
	I0131 03:25:47.944168       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.23"]
	I0131 03:25:47.990275       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0131 03:25:47.990376       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:25:47.990418       1 server_others.go:168] "Using iptables Proxier"
	I0131 03:25:47.994223       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:25:47.994520       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0131 03:25:47.994550       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:25:47.995767       1 config.go:188] "Starting service config controller"
	I0131 03:25:47.995811       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:25:47.995830       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:25:47.995834       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:25:47.997474       1 config.go:315] "Starting node config controller"
	I0131 03:25:47.997503       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:25:48.096000       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0131 03:25:48.096150       1 shared_informer.go:318] Caches are synced for service config
	I0131 03:25:48.097689       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2] <==
	W0131 03:25:28.783777       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0131 03:25:28.783785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0131 03:25:28.783888       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:25:28.783901       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:25:28.783991       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 03:25:28.784002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0131 03:25:29.618996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:25:29.619151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0131 03:25:29.626470       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0131 03:25:29.626541       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0131 03:25:29.744274       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 03:25:29.744440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0131 03:25:29.803840       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:25:29.803975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 03:25:29.952697       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:25:29.952849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0131 03:25:30.002164       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:25:30.002293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0131 03:25:30.078485       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0131 03:25:30.078693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0131 03:25:30.100241       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:25:30.100353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:25:30.250011       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 03:25:30.250061       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0131 03:25:32.066241       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:20:05 UTC, ends at Wed 2024-01-31 03:34:52 UTC. --
	Jan 31 03:32:03 no-preload-625812 kubelet[4242]: E0131 03:32:03.350312    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:32:15 no-preload-625812 kubelet[4242]: E0131 03:32:15.350270    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:32:27 no-preload-625812 kubelet[4242]: E0131 03:32:27.351566    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:32:32 no-preload-625812 kubelet[4242]: E0131 03:32:32.420795    4242 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:32:32 no-preload-625812 kubelet[4242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:32:32 no-preload-625812 kubelet[4242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:32:32 no-preload-625812 kubelet[4242]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:32:40 no-preload-625812 kubelet[4242]: E0131 03:32:40.350777    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:32:55 no-preload-625812 kubelet[4242]: E0131 03:32:55.350506    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:33:10 no-preload-625812 kubelet[4242]: E0131 03:33:10.350835    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:33:21 no-preload-625812 kubelet[4242]: E0131 03:33:21.351175    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:33:32 no-preload-625812 kubelet[4242]: E0131 03:33:32.418447    4242 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:33:32 no-preload-625812 kubelet[4242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:33:32 no-preload-625812 kubelet[4242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:33:32 no-preload-625812 kubelet[4242]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:33:36 no-preload-625812 kubelet[4242]: E0131 03:33:36.352118    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:33:51 no-preload-625812 kubelet[4242]: E0131 03:33:51.350513    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:34:03 no-preload-625812 kubelet[4242]: E0131 03:34:03.350357    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:34:18 no-preload-625812 kubelet[4242]: E0131 03:34:18.351355    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:34:30 no-preload-625812 kubelet[4242]: E0131 03:34:30.351408    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:34:32 no-preload-625812 kubelet[4242]: E0131 03:34:32.420163    4242 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:34:32 no-preload-625812 kubelet[4242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:34:32 no-preload-625812 kubelet[4242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:34:32 no-preload-625812 kubelet[4242]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:34:43 no-preload-625812 kubelet[4242]: E0131 03:34:43.351536    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	
	
	==> storage-provisioner [4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d] <==
	I0131 03:25:47.849742       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 03:25:47.861957       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 03:25:47.862028       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 03:25:47.886924       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 03:25:47.889203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f510a143-3344-4930-b9b2-dc5e181fbc36", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-625812_ba8b2ff5-e085-4ccd-bdcf-9fa5c6417682 became leader
	I0131 03:25:47.890114       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-625812_ba8b2ff5-e085-4ccd-bdcf-9fa5c6417682!
	I0131 03:25:47.991173       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-625812_ba8b2ff5-e085-4ccd-bdcf-9fa5c6417682!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-625812 -n no-preload-625812
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-625812 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vjnfp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-625812 describe pod metrics-server-57f55c9bc5-vjnfp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-625812 describe pod metrics-server-57f55c9bc5-vjnfp: exit status 1 (64.661789ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vjnfp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-625812 describe pod metrics-server-57f55c9bc5-vjnfp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0131 03:26:41.530824 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:27:12.249032 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:27:48.510025 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 03:28:04.579045 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:28:10.634422 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:28:35.292693 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:28:38.350984 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 03:28:38.886045 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:29:00.516213 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-711547 -n old-k8s-version-711547
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-31 03:35:19.151812727 +0000 UTC m=+5477.793160541
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-711547 -n old-k8s-version-711547
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-711547 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-711547 logs -n 25: (1.66347218s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-711547        | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC | 31 Jan 24 03:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-873005  | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC |                     |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229073             | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229073                  | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229073 --memory=2200 --alsologtostderr   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-229073 image list                           | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-096443 | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | disable-driver-mounts-096443                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625812                  | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:25 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-711547             | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-873005       | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-958254            | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:29 UTC |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-958254                 | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:17 UTC | 31 Jan 24 03:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:17:03
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:17:03.356553 1466459 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:17:03.356722 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356731 1466459 out.go:309] Setting ErrFile to fd 2...
	I0131 03:17:03.356736 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356921 1466459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:17:03.357497 1466459 out.go:303] Setting JSON to false
	I0131 03:17:03.358564 1466459 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28767,"bootTime":1706642257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:17:03.358632 1466459 start.go:138] virtualization: kvm guest
	I0131 03:17:03.361346 1466459 out.go:177] * [embed-certs-958254] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:17:03.363037 1466459 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:17:03.363052 1466459 notify.go:220] Checking for updates...
	I0131 03:17:03.364655 1466459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:17:03.366388 1466459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:17:03.368086 1466459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:17:03.369351 1466459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:17:03.370735 1466459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:17:03.372623 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:17:03.373004 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.373116 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.388091 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0131 03:17:03.388612 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.389200 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.389224 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.389606 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.389816 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.390157 1466459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:17:03.390631 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.390696 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.407513 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0131 03:17:03.408013 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.408552 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.408578 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.408936 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.409175 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.446580 1466459 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 03:17:03.447834 1466459 start.go:298] selected driver: kvm2
	I0131 03:17:03.447850 1466459 start.go:902] validating driver "kvm2" against &{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.447974 1466459 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:17:03.448798 1466459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.448929 1466459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:17:03.464292 1466459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:17:03.464713 1466459 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:17:03.464803 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:17:03.464821 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:17:03.464840 1466459 start_flags.go:321] config:
	{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.465034 1466459 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.466926 1466459 out.go:177] * Starting control plane node embed-certs-958254 in cluster embed-certs-958254
	I0131 03:17:03.166851 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:03.468094 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:17:03.468158 1466459 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:17:03.468179 1466459 cache.go:56] Caching tarball of preloaded images
	I0131 03:17:03.468267 1466459 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:17:03.468280 1466459 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:17:03.468422 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:17:03.468675 1466459 start.go:365] acquiring machines lock for embed-certs-958254: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:17:09.246814 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:12.318761 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:18.398731 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:21.470788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:27.550785 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:30.622804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:36.702802 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:39.774755 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:45.854764 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:48.926773 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:55.006804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:58.078768 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:04.158801 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:07.230749 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:13.310800 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:16.382788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:22.462833 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:25.534734 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:31.614821 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:34.686831 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:40.766796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:43.838796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:49.918807 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:52.923102 1465727 start.go:369] acquired machines lock for "old-k8s-version-711547" in 4m24.328353275s
	I0131 03:18:52.923156 1465727 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:18:52.923163 1465727 fix.go:54] fixHost starting: 
	I0131 03:18:52.923502 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:18:52.923535 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:18:52.938858 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0131 03:18:52.939426 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:18:52.939966 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:18:52.939993 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:18:52.940435 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:18:52.940700 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:18:52.940890 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:18:52.942694 1465727 fix.go:102] recreateIfNeeded on old-k8s-version-711547: state=Stopped err=<nil>
	I0131 03:18:52.942735 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	W0131 03:18:52.942937 1465727 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:18:52.944846 1465727 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-711547" ...
	I0131 03:18:52.946449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Start
	I0131 03:18:52.946661 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring networks are active...
	I0131 03:18:52.947481 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network default is active
	I0131 03:18:52.947856 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network mk-old-k8s-version-711547 is active
	I0131 03:18:52.948334 1465727 main.go:141] libmachine: (old-k8s-version-711547) Getting domain xml...
	I0131 03:18:52.949108 1465727 main.go:141] libmachine: (old-k8s-version-711547) Creating domain...
	I0131 03:18:52.920695 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:18:52.920763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:18:52.922905 1465496 machine.go:91] provisioned docker machine in 4m37.358485704s
	I0131 03:18:52.922986 1465496 fix.go:56] fixHost completed within 4m37.381896689s
	I0131 03:18:52.922997 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 4m37.381936859s
	W0131 03:18:52.923026 1465496 start.go:694] error starting host: provision: host is not running
	W0131 03:18:52.923126 1465496 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0131 03:18:52.923138 1465496 start.go:709] Will try again in 5 seconds ...
	I0131 03:18:54.170545 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting to get IP...
	I0131 03:18:54.171580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.171974 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.172053 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.171968 1467209 retry.go:31] will retry after 195.285731ms: waiting for machine to come up
	I0131 03:18:54.368768 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.369288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.369325 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.369224 1467209 retry.go:31] will retry after 291.163288ms: waiting for machine to come up
	I0131 03:18:54.661822 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.662222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.662266 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.662214 1467209 retry.go:31] will retry after 396.125436ms: waiting for machine to come up
	I0131 03:18:55.059613 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.060062 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.060099 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.060009 1467209 retry.go:31] will retry after 609.786973ms: waiting for machine to come up
	I0131 03:18:55.671954 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.672388 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.672431 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.672334 1467209 retry.go:31] will retry after 716.179011ms: waiting for machine to come up
	I0131 03:18:56.390239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:56.390632 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:56.390667 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:56.390568 1467209 retry.go:31] will retry after 881.998023ms: waiting for machine to come up
	I0131 03:18:57.274841 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:57.275260 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:57.275293 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:57.275202 1467209 retry.go:31] will retry after 1.172177257s: waiting for machine to come up
	I0131 03:18:58.449291 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:58.449814 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:58.449869 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:58.449774 1467209 retry.go:31] will retry after 1.046487536s: waiting for machine to come up
	I0131 03:18:57.925392 1465496 start.go:365] acquiring machines lock for no-preload-625812: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:18:59.498215 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:59.498699 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:59.498739 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:59.498640 1467209 retry.go:31] will retry after 1.563889217s: waiting for machine to come up
	I0131 03:19:01.063580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:01.064137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:01.064179 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:01.064063 1467209 retry.go:31] will retry after 2.225514736s: waiting for machine to come up
	I0131 03:19:03.290747 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:03.291285 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:03.291322 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:03.291205 1467209 retry.go:31] will retry after 2.011947032s: waiting for machine to come up
	I0131 03:19:05.305574 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:05.306072 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:05.306106 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:05.306012 1467209 retry.go:31] will retry after 3.104285698s: waiting for machine to come up
	I0131 03:19:08.411557 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:08.412028 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:08.412054 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:08.411975 1467209 retry.go:31] will retry after 4.201966677s: waiting for machine to come up
	I0131 03:19:12.618299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.618866 1465727 main.go:141] libmachine: (old-k8s-version-711547) Found IP for machine: 192.168.50.63
	I0131 03:19:12.618893 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserving static IP address...
	I0131 03:19:12.618913 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has current primary IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.619364 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserved static IP address: 192.168.50.63
	I0131 03:19:12.619389 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting for SSH to be available...
	I0131 03:19:12.619414 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.619452 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | skip adding static IP to network mk-old-k8s-version-711547 - found existing host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"}
	I0131 03:19:12.619471 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Getting to WaitForSSH function...
	I0131 03:19:12.621473 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621783 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.621805 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621891 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH client type: external
	I0131 03:19:12.621934 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa (-rw-------)
	I0131 03:19:12.621965 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:12.621977 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | About to run SSH command:
	I0131 03:19:12.621987 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | exit 0
	I0131 03:19:12.718254 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:12.718659 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetConfigRaw
	I0131 03:19:12.719369 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:12.722134 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722588 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.722611 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722906 1465727 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/config.json ...
	I0131 03:19:12.723101 1465727 machine.go:88] provisioning docker machine ...
	I0131 03:19:12.723121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:12.723399 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723611 1465727 buildroot.go:166] provisioning hostname "old-k8s-version-711547"
	I0131 03:19:12.723630 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723795 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.726052 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726463 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.726507 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726656 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.726832 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727022 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727122 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.727283 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.727665 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.727680 1465727 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-711547 && echo "old-k8s-version-711547" | sudo tee /etc/hostname
	I0131 03:19:12.870818 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-711547
	
	I0131 03:19:12.870872 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.873799 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874205 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.874242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874355 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.874585 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874774 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874920 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.875079 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.875412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.875428 1465727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-711547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-711547/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-711547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:13.014386 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:13.014419 1465727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:13.014447 1465727 buildroot.go:174] setting up certificates
	I0131 03:19:13.014460 1465727 provision.go:83] configureAuth start
	I0131 03:19:13.014471 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:13.014821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:13.017730 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018105 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.018149 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018286 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.020361 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020680 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.020707 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020896 1465727 provision.go:138] copyHostCerts
	I0131 03:19:13.020961 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:13.020975 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:13.021069 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:13.021199 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:13.021212 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:13.021252 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:13.021393 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:13.021404 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:13.021442 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:13.021512 1465727 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-711547 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube old-k8s-version-711547]
	I0131 03:19:13.265370 1465727 provision.go:172] copyRemoteCerts
	I0131 03:19:13.265438 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:13.265466 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.268546 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269055 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.269090 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269281 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.269518 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.269688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.269849 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.362848 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:13.384287 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0131 03:19:13.405813 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:19:13.427630 1465727 provision.go:86] duration metric: configureAuth took 413.151329ms
	I0131 03:19:13.427671 1465727 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:13.427880 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:19:13.427963 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.430829 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.431299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431515 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.431771 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.431939 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.432092 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.432256 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.432619 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.432638 1465727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:14.011257 1465898 start.go:369] acquired machines lock for "default-k8s-diff-port-873005" in 4m34.419162413s
	I0131 03:19:14.011330 1465898 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:14.011340 1465898 fix.go:54] fixHost starting: 
	I0131 03:19:14.011729 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:14.011767 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:14.028941 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0131 03:19:14.029399 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:14.029937 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:19:14.029968 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:14.030321 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:14.030510 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:14.030692 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:19:14.032290 1465898 fix.go:102] recreateIfNeeded on default-k8s-diff-port-873005: state=Stopped err=<nil>
	I0131 03:19:14.032322 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	W0131 03:19:14.032499 1465898 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:14.034263 1465898 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-873005" ...
	I0131 03:19:14.035857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Start
	I0131 03:19:14.036028 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring networks are active...
	I0131 03:19:14.036734 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network default is active
	I0131 03:19:14.037140 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network mk-default-k8s-diff-port-873005 is active
	I0131 03:19:14.037572 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Getting domain xml...
	I0131 03:19:14.038254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Creating domain...
	I0131 03:19:13.745584 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:13.745630 1465727 machine.go:91] provisioned docker machine in 1.02251207s
	I0131 03:19:13.745646 1465727 start.go:300] post-start starting for "old-k8s-version-711547" (driver="kvm2")
	I0131 03:19:13.745663 1465727 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:13.745688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:13.746069 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:13.746100 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.748837 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749259 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.749309 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749489 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.749691 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.749848 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.749999 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.844423 1465727 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:13.848230 1465727 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:13.848263 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:13.848346 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:13.848431 1465727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:13.848517 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:13.857046 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:13.877753 1465727 start.go:303] post-start completed in 132.085834ms
	I0131 03:19:13.877806 1465727 fix.go:56] fixHost completed within 20.954639604s
	I0131 03:19:13.877836 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.880627 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.880914 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.880948 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.881168 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.881401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881594 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881802 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.882012 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.882412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.882424 1465727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:14.011062 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671153.963761136
	
	I0131 03:19:14.011098 1465727 fix.go:206] guest clock: 1706671153.963761136
	I0131 03:19:14.011111 1465727 fix.go:219] Guest: 2024-01-31 03:19:13.963761136 +0000 UTC Remote: 2024-01-31 03:19:13.877812082 +0000 UTC m=+285.451358106 (delta=85.949054ms)
	I0131 03:19:14.011141 1465727 fix.go:190] guest clock delta is within tolerance: 85.949054ms
	I0131 03:19:14.011149 1465727 start.go:83] releasing machines lock for "old-k8s-version-711547", held for 21.088010365s
	I0131 03:19:14.011234 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.011556 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:14.014323 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014754 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.014790 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014966 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015623 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015846 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015953 1465727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:14.016017 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.016087 1465727 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:14.016121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.018767 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019063 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019147 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019185 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019338 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019422 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019450 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019500 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019693 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.019775 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019854 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.019952 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.020096 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.111280 1465727 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:14.148710 1465727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:14.287476 1465727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:14.293232 1465727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:14.293309 1465727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:14.306910 1465727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:14.306939 1465727 start.go:475] detecting cgroup driver to use...
	I0131 03:19:14.307001 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:14.325824 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:14.339835 1465727 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:14.339908 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:14.354064 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:14.367342 1465727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:14.476462 1465727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:14.602643 1465727 docker.go:233] disabling docker service ...
	I0131 03:19:14.602711 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:14.618228 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:14.630450 1465727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:14.758176 1465727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:14.870949 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:14.882268 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:14.898622 1465727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0131 03:19:14.898685 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.907377 1465727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:14.907470 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.915868 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.924046 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.932324 1465727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:14.941046 1465727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:14.949134 1465727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:14.949196 1465727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:14.965561 1465727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:14.973790 1465727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:15.078782 1465727 ssh_runner.go:195] Run: sudo systemctl restart crio
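For readability, the CRI-O runtime setup recorded in the lines above can be condensed into the shell sequence below. This is only a sketch reconstructed from the logged ssh_runner commands; the drop-in path /etc/crio/crio.conf.d/02-crio.conf and the values are taken verbatim from the log, and it is not an official minikube script.

	# Point crictl at the CRI-O socket (written to /etc/crictl.yaml in the log).
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and switch CRI-O to the cgroupfs cgroup manager.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Clear the minikube CNI override, load br_netfilter, enable forwarding, restart CRI-O.
	sudo rm -rf /etc/cni/net.mk
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio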
	I0131 03:19:15.239650 1465727 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:15.239735 1465727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:15.244418 1465727 start.go:543] Will wait 60s for crictl version
	I0131 03:19:15.244501 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:15.247984 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:15.287716 1465727 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:15.287827 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.339818 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.393318 1465727 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0131 03:19:15.394911 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:15.397888 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:15.398313 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398637 1465727 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:15.402865 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:15.414268 1465727 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 03:19:15.414361 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:15.460589 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:15.460676 1465727 ssh_runner.go:195] Run: which lz4
	I0131 03:19:15.464663 1465727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:15.468694 1465727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:15.468728 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0131 03:19:17.115892 1465727 crio.go:444] Took 1.651263 seconds to copy over tarball
	I0131 03:19:17.115979 1465727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:15.308732 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting to get IP...
	I0131 03:19:15.309704 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310121 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.310092 1467325 retry.go:31] will retry after 215.51674ms: waiting for machine to come up
	I0131 03:19:15.527614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528155 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528192 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.528108 1467325 retry.go:31] will retry after 346.07944ms: waiting for machine to come up
	I0131 03:19:15.875792 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876340 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876375 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.876290 1467325 retry.go:31] will retry after 476.08407ms: waiting for machine to come up
	I0131 03:19:16.353712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354323 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.354196 1467325 retry.go:31] will retry after 382.739917ms: waiting for machine to come up
	I0131 03:19:16.738958 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739534 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739566 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.739504 1467325 retry.go:31] will retry after 511.138171ms: waiting for machine to come up
	I0131 03:19:17.252373 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252862 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252902 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:17.252798 1467325 retry.go:31] will retry after 879.985444ms: waiting for machine to come up
	I0131 03:19:18.134757 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135287 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135313 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:18.135233 1467325 retry.go:31] will retry after 1.043236668s: waiting for machine to come up
	I0131 03:19:19.179844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180339 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180369 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:19.180288 1467325 retry.go:31] will retry after 1.296129808s: waiting for machine to come up
	I0131 03:19:19.822171 1465727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706149181s)
	I0131 03:19:19.822217 1465727 crio.go:451] Took 2.706292 seconds to extract the tarball
	I0131 03:19:19.822233 1465727 ssh_runner.go:146] rm: /preloaded.tar.lz4
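The preload handling logged above (existence check, copy of the cached tarball, extraction into /var, cleanup) corresponds roughly to the following manual steps. The copy is actually performed over minikube's own SSH session (ssh_runner scp), so the plain scp shown here, and the un-mangled "%s %y" stat format, are assumptions made for illustration; the paths, guest IP, and tar flags come from the log.

	# Check whether the preload tarball is already on the guest; copy it over if not.
	stat -c "%s %y" /preloaded.tar.lz4 || \
	  scp -i ~/.minikube/machines/old-k8s-version-711547/id_rsa \
	    /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 \
	    docker@192.168.50.63:/preloaded.tar.lz4
	# Extract the preloaded images into /var, preserving extended attributes, then clean up.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4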
	I0131 03:19:19.861493 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:19.905950 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:19.905979 1465727 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:19:19.906033 1465727 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.906061 1465727 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.906080 1465727 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.906077 1465727 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.906094 1465727 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:19.906099 1465727 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.906111 1465727 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0131 03:19:19.906179 1465727 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907636 1465727 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.907728 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.907746 1465727 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907750 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.907749 1465727 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.907783 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.907805 1465727 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0131 03:19:19.907807 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.091717 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.132448 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.140199 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0131 03:19:20.146177 1465727 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0131 03:19:20.146263 1465727 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.146324 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.206757 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.216932 1465727 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0131 03:19:20.216985 1465727 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.217082 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219340 1465727 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0131 03:19:20.219367 1465727 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0131 03:19:20.219390 1465727 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.219408 1465727 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.219432 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219449 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.222519 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.241389 1465727 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0131 03:19:20.241449 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.241452 1465727 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0131 03:19:20.241566 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.293129 1465727 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0131 03:19:20.293183 1465727 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.293213 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.293262 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.293284 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.293232 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321447 1465727 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0131 03:19:20.321512 1465727 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.321576 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321605 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0131 03:19:20.321743 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0131 03:19:20.401651 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0131 03:19:20.401720 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.401731 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0131 03:19:20.401793 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0131 03:19:20.401872 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0131 03:19:20.401945 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.439360 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0131 03:19:20.449635 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0131 03:19:20.765201 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:20.911818 1465727 cache_images.go:92] LoadImages completed in 1.005820808s
	W0131 03:19:20.911923 1465727 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
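The image-cache reconciliation traced above follows a simple pattern: inspect the image in the container runtime, and if it is missing or at the wrong digest, remove the stale tag with crictl and fall back to loading the tarball kept in the local cache directory. The following is a minimal Go sketch of that pattern only; the helper names and cacheDir path are illustrative and this is not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// cacheDir is an assumed location for per-arch image tarballs (illustrative).
const cacheDir = "/home/jenkins/.minikube/cache/images/amd64"

// ensureImage checks whether the runtime already holds the image at the expected
// ID; if not, it removes the stale tag and reports which cached tarball would
// have to be transferred and loaded.
func ensureImage(image, wantID string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the right hash
	}
	// stale or missing: drop the tag so a fresh copy can be loaded
	if rmErr := exec.Command("sudo", "crictl", "rmi", image).Run(); rmErr != nil {
		fmt.Printf("rmi %s: %v (may simply not exist yet)\n", image, rmErr)
	}
	// cached tarballs are named with ':' replaced by '_', e.g. kube-proxy_v1.16.0
	tarball := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	fmt.Printf("would load %s from %s\n", image, tarball)
	return nil
}

func main() {
	_ = ensureImage("registry.k8s.io/pause:3.1", "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e")
}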
	I0131 03:19:20.912019 1465727 ssh_runner.go:195] Run: crio config
	I0131 03:19:20.978267 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:20.978296 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:20.978318 1465727 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:20.978361 1465727 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-711547 NodeName:old-k8s-version-711547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0131 03:19:20.978540 1465727 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-711547"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-711547
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.63:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:20.978635 1465727 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-711547 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
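The kubelet drop-in shown above is essentially a fixed ExecStart line plus a handful of node-specific flags (runtime socket, hostname override, node IP). The Go sketch below assembles such a drop-in from a small struct; the struct and function names are made up for illustration and this is not the template minikube itself uses.

package main

import (
	"fmt"
	"strings"
)

// nodeConfig holds the few values that vary per node (illustrative struct).
type nodeConfig struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
	CRISocket         string
}

// kubeletDropIn renders a systemd drop-in like the one scp'd to 10-kubeadm.conf above.
func kubeletDropIn(c nodeConfig) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=unix://" + c.CRISocket,
		"--hostname-override=" + c.Hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--network-plugin=cni",
		"--node-ip=" + c.NodeIP,
	}
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet %s

[Install]
`, c.KubernetesVersion, strings.Join(flags, " "))
}

func main() {
	fmt.Print(kubeletDropIn(nodeConfig{
		KubernetesVersion: "v1.16.0",
		Hostname:          "old-k8s-version-711547",
		NodeIP:            "192.168.50.63",
		CRISocket:         "/var/run/crio/crio.sock",
	}))
}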
	I0131 03:19:20.978690 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0131 03:19:20.988177 1465727 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:20.988281 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:20.999558 1465727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0131 03:19:21.018567 1465727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:21.036137 1465727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0131 03:19:21.051742 1465727 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:21.056334 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
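The bash one-liner above makes the control-plane.minikube.internal entry idempotent: strip any existing line for that hostname, append the fresh mapping, and replace /etc/hosts with the result. The same logic, done locally in a small Go sketch (paths and the hostname mirror the log; error handling is trimmed and this is only an illustration):

package main

import (
	"os"
	"strings"
)

// setHostsEntry rewrites hostsPath so that exactly one line maps ip to name.
func setHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any previous mapping for this name (the log greps for "\t<name>$")
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.50.63", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}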
	I0131 03:19:21.068635 1465727 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547 for IP: 192.168.50.63
	I0131 03:19:21.068670 1465727 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:21.068847 1465727 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:21.068894 1465727 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:21.069089 1465727 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/client.key
	I0131 03:19:21.069185 1465727 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key.1519f60b
	I0131 03:19:21.069262 1465727 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key
	I0131 03:19:21.069418 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:21.069460 1465727 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:21.069476 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:21.069517 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:21.069556 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:21.069595 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:21.069658 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:21.070416 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:21.096160 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:21.119906 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:21.144478 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:21.169174 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:21.191807 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:21.215673 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:21.237705 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:21.262763 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:21.284935 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:21.306372 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:21.327718 1465727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:21.343219 1465727 ssh_runner.go:195] Run: openssl version
	I0131 03:19:21.348904 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:21.358119 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362537 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362619 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.368555 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:21.378236 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:21.387651 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392087 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392155 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.397511 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:21.406631 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:21.416176 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420716 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420816 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.426032 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
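Each of the symlink commands above names the link after the certificate's OpenSSL subject hash: `openssl x509 -hash -noout` prints the hash (e.g. b5213941), and the PEM is linked as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can find the CA. A minimal Go sketch of that one step, shelling out to openssl the same way the log does (assumes the destination directory is writable):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs certPath into certsDir under its OpenSSL subject-hash name.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	_ = linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
}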
	I0131 03:19:21.434979 1465727 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:21.439153 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:21.444648 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:21.450243 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:21.455489 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:21.460794 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:21.466219 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
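The `-checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours. The equivalent check can be done in Go with crypto/x509 instead of shelling out; the sketch below is illustrative only, not minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true when the cert's NotAfter falls inside the next d (here: 24h)
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}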
	I0131 03:19:21.471530 1465727 kubeadm.go:404] StartCluster: {Name:old-k8s-version-711547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:21.471628 1465727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:21.471677 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:21.508722 1465727 cri.go:89] found id: ""
	I0131 03:19:21.508795 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:21.517913 1465727 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:21.517943 1465727 kubeadm.go:636] restartCluster start
	I0131 03:19:21.518012 1465727 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:21.526290 1465727 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:21.527501 1465727 kubeconfig.go:92] found "old-k8s-version-711547" server: "https://192.168.50.63:8443"
	I0131 03:19:21.530259 1465727 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:21.538442 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:21.538528 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:21.548956 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.038468 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.038574 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.049394 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.538605 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.538701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.549651 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:23.038857 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.038988 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.050489 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:20.478788 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479296 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479341 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:20.479262 1467325 retry.go:31] will retry after 1.385706797s: waiting for machine to come up
	I0131 03:19:21.867040 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867480 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867506 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:21.867432 1467325 retry.go:31] will retry after 2.023566474s: waiting for machine to come up
	I0131 03:19:23.893713 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894188 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894222 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:23.894119 1467325 retry.go:31] will retry after 2.335724195s: waiting for machine to come up
	I0131 03:19:23.539335 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.539444 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.550866 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.038592 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.038710 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.050077 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.538579 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.538661 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.549810 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.039420 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.039512 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.051101 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.538549 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.538654 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.552821 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.039279 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.039395 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.050150 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.538699 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.538841 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.553086 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.038585 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.038701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.050685 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.539261 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.539392 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.550316 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:28.039448 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.039564 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.051196 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.231540 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231945 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231970 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:26.231895 1467325 retry.go:31] will retry after 2.956919877s: waiting for machine to come up
	I0131 03:19:29.190010 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190513 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190549 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:29.190433 1467325 retry.go:31] will retry after 3.186526476s: waiting for machine to come up
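The repeated "will retry after …: waiting for machine to come up" messages come from a plain wait-with-backoff loop: poll for the VM's DHCP lease, and if no IP is visible yet, sleep a growing (jittered) interval and try again until a deadline. A bare-bones Go sketch of that pattern; lookupIP is a stand-in, not the libmachine API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for "ask libvirt for the domain's current lease".
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

// waitForIP polls lookupIP with a growing, jittered backoff until deadline.
func waitForIP(deadline time.Time) (string, error) {
	backoff := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(time.Now().Add(2 * time.Minute))
	fmt.Println(ip, err)
}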
	I0131 03:19:28.539230 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.539326 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.551055 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.038675 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.038783 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.049926 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.538507 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.538606 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.549309 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.039257 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.039359 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.050555 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.539147 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.539286 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.550179 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.038685 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.038809 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.050144 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.538939 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.539024 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.549604 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.549647 1465727 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
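The long run of failed "Checking apiserver status" probes above is a poll-until-deadline loop: roughly every 500ms, pgrep for a kube-apiserver process, and when the context deadline passes without a hit, flag the cluster for reconfiguration. The sketch below shows only that polling pattern in Go; the real check does more than pgrep, so treat the names and details as illustrative:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process is visible via pgrep.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServer polls until the apiserver shows up or the context expires.
func waitForAPIServer(ctx context.Context) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if apiserverRunning() {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // e.g. context deadline exceeded
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}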
	I0131 03:19:31.549660 1465727 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:31.549678 1465727 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:31.549770 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:31.587751 1465727 cri.go:89] found id: ""
	I0131 03:19:31.587822 1465727 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:31.603397 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:31.612195 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:31.612263 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620959 1465727 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620984 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:31.737416 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.645078 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.861238 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.944897 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:33.048396 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:33.048496 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
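The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml instead of doing a full `kubeadm init`. A compact sketch of driving those phases in order and stopping at the first failure; the binary and config paths are copied from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.16.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}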
	I0131 03:19:33.587337 1466459 start.go:369] acquired machines lock for "embed-certs-958254" in 2m30.118621848s
	I0131 03:19:33.587411 1466459 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:33.587444 1466459 fix.go:54] fixHost starting: 
	I0131 03:19:33.587872 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:33.587906 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:33.608024 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0131 03:19:33.608545 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:33.609015 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:19:33.609048 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:33.609468 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:33.609659 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:33.609796 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:19:33.611524 1466459 fix.go:102] recreateIfNeeded on embed-certs-958254: state=Stopped err=<nil>
	I0131 03:19:33.611572 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	W0131 03:19:33.611752 1466459 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:33.613613 1466459 out.go:177] * Restarting existing kvm2 VM for "embed-certs-958254" ...
	I0131 03:19:32.379632 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380099 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380134 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Found IP for machine: 192.168.61.123
	I0131 03:19:32.380150 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserving static IP address...
	I0131 03:19:32.380555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserved static IP address: 192.168.61.123
	I0131 03:19:32.380594 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.380610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for SSH to be available...
	I0131 03:19:32.380647 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | skip adding static IP to network mk-default-k8s-diff-port-873005 - found existing host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"}
	I0131 03:19:32.380661 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Getting to WaitForSSH function...
	I0131 03:19:32.382401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.382787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382872 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH client type: external
	I0131 03:19:32.382903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa (-rw-------)
	I0131 03:19:32.382943 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:32.382959 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | About to run SSH command:
	I0131 03:19:32.382984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | exit 0
	I0131 03:19:32.470672 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:32.471097 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetConfigRaw
	I0131 03:19:32.471768 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.474225 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474597 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.474631 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474948 1465898 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/config.json ...
	I0131 03:19:32.475139 1465898 machine.go:88] provisioning docker machine ...
	I0131 03:19:32.475158 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:32.475374 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475542 1465898 buildroot.go:166] provisioning hostname "default-k8s-diff-port-873005"
	I0131 03:19:32.475564 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475720 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.478005 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478356 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.478391 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478466 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.478693 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.478871 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.479083 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.479287 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.479622 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.479636 1465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-873005 && echo "default-k8s-diff-port-873005" | sudo tee /etc/hostname
	I0131 03:19:32.608136 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-873005
	
	I0131 03:19:32.608173 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.611145 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611544 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.611580 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611716 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.611937 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612154 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612354 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.612511 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.612878 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.612903 1465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-873005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-873005/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-873005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:32.734103 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:32.734144 1465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:32.734176 1465898 buildroot.go:174] setting up certificates
	I0131 03:19:32.734196 1465898 provision.go:83] configureAuth start
	I0131 03:19:32.734209 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.734550 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.737468 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.737810 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.737844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.738096 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.740787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.741233 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741374 1465898 provision.go:138] copyHostCerts
	I0131 03:19:32.741429 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:32.741442 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:32.741498 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:32.741632 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:32.741642 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:32.741665 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:32.741716 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:32.741722 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:32.741738 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:32.741784 1465898 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-873005 san=[192.168.61.123 192.168.61.123 localhost 127.0.0.1 minikube default-k8s-diff-port-873005]
	I0131 03:19:32.850632 1465898 provision.go:172] copyRemoteCerts
	I0131 03:19:32.850695 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:32.850721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.853291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.853651 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.854016 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.854194 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.854361 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:32.943528 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0131 03:19:32.970345 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:32.995909 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
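copyRemoteCerts above pushes the freshly minted server cert, its key, and the CA onto the guest under /etc/docker. Over plain OpenSSH that amounts to staging each file with scp and then installing it with sudo; the Go sketch below shows that sequence only, with a hypothetical host, key path, and file list rather than minikube's actual transfer code:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// pushCert copies a PEM file to the guest and installs it under /etc/docker.
func pushCert(host, sshKey, localPath string) error {
	name := filepath.Base(localPath)
	staging := "/tmp/" + name
	// stage the file with scp, then move it into place with sudo
	if err := exec.Command("scp", "-i", sshKey, localPath, host+":"+staging).Run(); err != nil {
		return fmt.Errorf("scp %s: %w", name, err)
	}
	install := fmt.Sprintf("sudo install -m 0640 %s /etc/docker/%s", staging, name)
	return exec.Command("ssh", "-i", sshKey, host, install).Run()
}

func main() {
	for _, f := range []string{"server.pem", "server-key.pem", "ca.pem"} {
		if err := pushCert("docker@192.168.61.123", "/home/jenkins/.ssh/id_rsa", "/tmp/out/"+f); err != nil {
			fmt.Println(err)
		}
	}
}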
	I0131 03:19:33.024408 1465898 provision.go:86] duration metric: configureAuth took 290.196472ms
	I0131 03:19:33.024438 1465898 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:33.024661 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:33.024755 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.027751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.028312 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028469 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.028719 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.028961 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.029180 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.029424 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.029790 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.029810 1465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:33.350806 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:33.350839 1465898 machine.go:91] provisioned docker machine in 875.685131ms
	I0131 03:19:33.350855 1465898 start.go:300] post-start starting for "default-k8s-diff-port-873005" (driver="kvm2")
	I0131 03:19:33.350871 1465898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:33.350895 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.351287 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:33.351334 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.353986 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354419 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.354443 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354689 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.354898 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.355046 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.355221 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.439603 1465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:33.443119 1465898 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:33.443145 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:33.443222 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:33.443320 1465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:33.443430 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:33.451425 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:33.471270 1465898 start.go:303] post-start completed in 120.397142ms
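The filesync scan above copies anything found under the local .minikube/files tree to the identical path on the guest (here files/etc/ssl/certs/14199762.pem lands in /etc/ssl/certs/14199762.pem). A minimal sketch of that path mapping in Go, assuming a hypothetical files root; the actual scp transfer is not shown:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// destPaths walks a local "files" root and maps every file found there
// to the path it should land at on the guest, mirroring how the
// filesync scan in the log maps files/etc/ssl/certs/14199762.pem to
// /etc/ssl/certs/14199762.pem. The root passed in main is hypothetical.
func destPaths(root string) (map[string]string, error) {
	out := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel := strings.TrimPrefix(path, root)
		out[path] = rel // ".../files/etc/ssl/certs/x.pem" -> "/etc/ssl/certs/x.pem"
		return nil
	})
	return out, err
}

func main() {
	m, err := destPaths("/home/jenkins/minikube-integration/.minikube/files")
	if err != nil {
		fmt.Println("scan failed (expected away from the CI host):", err)
		return
	}
	for src, dst := range m {
		fmt.Printf("scp %s --> %s\n", src, dst)
	}
}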
	I0131 03:19:33.471302 1465898 fix.go:56] fixHost completed within 19.459960903s
	I0131 03:19:33.471326 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.473691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474060 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.474091 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474244 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.474430 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474627 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474753 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.474918 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.475237 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.475249 1465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:33.587174 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671173.532604525
	
	I0131 03:19:33.587202 1465898 fix.go:206] guest clock: 1706671173.532604525
	I0131 03:19:33.587217 1465898 fix.go:219] Guest: 2024-01-31 03:19:33.532604525 +0000 UTC Remote: 2024-01-31 03:19:33.47130747 +0000 UTC m=+294.038044427 (delta=61.297055ms)
	I0131 03:19:33.587243 1465898 fix.go:190] guest clock delta is within tolerance: 61.297055ms
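The fix step above reads the guest clock with "date +%s.%N", compares it to the host clock, and accepts the roughly 61ms difference because it is under the drift tolerance. A small sketch of that comparison, assuming a 2-second tolerance (the real tolerance lives in minikube's fix.go and may differ):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// absolute difference from the supplied host timestamp.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing seconds: %w", err)
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad the fractional part out to nanoseconds before parsing.
		frac := (parts[1] + "000000000")[:9]
		nsec, _ = strconv.ParseInt(frac, 10, 64)
	}
	guest := time.Unix(sec, nsec)
	return time.Duration(math.Abs(float64(guest.Sub(host)))), nil
}

func main() {
	// Hypothetical values mirroring the log line above.
	const tolerance = 2 * time.Second // assumed tolerance, not minikube's constant
	delta, err := clockDelta("1706671173.532604525", time.Unix(1706671173, 471307470))
	if err != nil {
		panic(err)
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}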
	I0131 03:19:33.587251 1465898 start.go:83] releasing machines lock for "default-k8s-diff-port-873005", held for 19.57594393s
	I0131 03:19:33.587282 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.587557 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:33.590395 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590776 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.590809 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590995 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591623 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591822 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591926 1465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:33.591999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.592054 1465898 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:33.592078 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.594999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595446 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.595477 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595644 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.595805 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595879 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596082 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596258 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.596286 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.596380 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.596390 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.596579 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596760 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596951 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.715222 1465898 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:33.721794 1465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:33.871506 1465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:33.877488 1465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:33.877596 1465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:33.896121 1465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
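The find/mv pipeline above sidelines any bridge or podman CNI configuration by renaming it to *.mk_disabled so it cannot conflict with the CNI minikube manages. A rough Go equivalent of that rename pass (the directory path is taken from the log, everything else is an illustration):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs to
// *.mk_disabled, the same effect as the find/mv pipeline in the log,
// leaving only the CNI config minikube manages active.
func disableConflictingCNI(dir string) ([]string, error) {
	var disabled []string
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	fmt.Printf("disabled %v bridge/podman cni config(s), err: %v\n", disabled, err)
}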
	I0131 03:19:33.896156 1465898 start.go:475] detecting cgroup driver to use...
	I0131 03:19:33.896245 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:33.912876 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:33.927661 1465898 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:33.927743 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:33.944332 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:33.960438 1465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:34.086879 1465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:34.218866 1465898 docker.go:233] disabling docker service ...
	I0131 03:19:34.218946 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:34.233585 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:34.246358 1465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:34.387480 1465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:34.513082 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:34.526532 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:34.544801 1465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:34.544902 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.558806 1465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:34.558905 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.569251 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.582784 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.595979 1465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:34.608318 1465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:34.616417 1465898 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:34.616494 1465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:34.629018 1465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:34.638513 1465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:34.753541 1465898 ssh_runner.go:195] Run: sudo systemctl restart crio
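The sed commands above pin the CRI-O pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and move conmon into the "pod" cgroup before crio is restarted. A sketch of the same line edits done natively on a local copy of 02-crio.conf (the file name in main is only a stand-in for the real path on the guest):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the edits the log performs with sed: pin the
// pause image, force the requested cgroup manager, and place conmon in
// the "pod" cgroup. Error handling is trimmed to keep the sketch short.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager))
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	// Hypothetical local copy of the file edited in the log.
	if err := rewriteCrioConf("02-crio.conf", "registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("02-crio.conf updated; a real provisioner would now restart crio")
}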
	I0131 03:19:34.963779 1465898 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:34.963868 1465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:34.969755 1465898 start.go:543] Will wait 60s for crictl version
	I0131 03:19:34.969826 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:19:34.974176 1465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:35.020759 1465898 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:35.020850 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.072999 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.143712 1465898 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:19:33.615078 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Start
	I0131 03:19:33.615258 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring networks are active...
	I0131 03:19:33.616056 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network default is active
	I0131 03:19:33.616376 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network mk-embed-certs-958254 is active
	I0131 03:19:33.616770 1466459 main.go:141] libmachine: (embed-certs-958254) Getting domain xml...
	I0131 03:19:33.617424 1466459 main.go:141] libmachine: (embed-certs-958254) Creating domain...
	I0131 03:19:35.016562 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting to get IP...
	I0131 03:19:35.017711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.018134 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.018234 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.018115 1467469 retry.go:31] will retry after 281.115622ms: waiting for machine to come up
	I0131 03:19:35.300987 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.301642 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.301672 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.301583 1467469 retry.go:31] will retry after 382.696531ms: waiting for machine to come up
	I0131 03:19:35.686371 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.686945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.686983 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.686881 1467469 retry.go:31] will retry after 467.397008ms: waiting for machine to come up
	I0131 03:19:36.156392 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.157129 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.157161 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.157087 1467469 retry.go:31] will retry after 588.034996ms: waiting for machine to come up
	I0131 03:19:36.747103 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.747739 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.747771 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.747711 1467469 retry.go:31] will retry after 570.532804ms: waiting for machine to come up
	I0131 03:19:37.319694 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.320231 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.320264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.320206 1467469 retry.go:31] will retry after 572.77687ms: waiting for machine to come up
	I0131 03:19:37.895308 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.895814 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.895844 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.895769 1467469 retry.go:31] will retry after 833.23491ms: waiting for machine to come up
	I0131 03:19:33.549149 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.048799 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.549314 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.048885 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.075463 1465727 api_server.go:72] duration metric: took 2.027068042s to wait for apiserver process to appear ...
	I0131 03:19:35.075490 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:35.075525 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:35.145198 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:35.148610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149052 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:35.149087 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149329 1465898 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:35.153543 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:35.169144 1465898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:35.169226 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:35.217572 1465898 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:35.217675 1465898 ssh_runner.go:195] Run: which lz4
	I0131 03:19:35.221897 1465898 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:35.226333 1465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:35.226373 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:36.870773 1465898 crio.go:444] Took 1.648904 seconds to copy over tarball
	I0131 03:19:36.870903 1465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:38.730812 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:38.731317 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:38.731367 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:38.731283 1467469 retry.go:31] will retry after 1.083923411s: waiting for machine to come up
	I0131 03:19:39.816550 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:39.817000 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:39.817035 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:39.816957 1467469 retry.go:31] will retry after 1.414569505s: waiting for machine to come up
	I0131 03:19:41.232711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:41.233072 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:41.233104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:41.233020 1467469 retry.go:31] will retry after 1.829994317s: waiting for machine to come up
	I0131 03:19:43.065343 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:43.065823 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:43.065857 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:43.065760 1467469 retry.go:31] will retry after 2.506323142s: waiting for machine to come up
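The retry.go lines above poll for the VM's DHCP lease with a growing, slightly randomized delay between attempts until an IP appears. A generic sketch of that wait loop; the backoff curve below is an assumption, not minikube's actual schedule:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("waiting for machine to come up")

// waitForIP polls lookup until it returns an address or the deadline
// passes, sleeping a randomized, growing interval between attempts,
// the same shape as the "will retry after ..." lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if wait < 2*time.Second {
			wait *= 2
		}
	}
	return "", fmt.Errorf("timed out: %w", errNoIP)
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errNoIP // simulate the DHCP lease not existing yet
		}
		return "192.168.39.200", nil // hypothetical address
	}, 30*time.Second)
	fmt.Println(ip, err)
}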
	I0131 03:19:40.076389 1465727 api_server.go:269] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0131 03:19:40.076448 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.717017 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.717059 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:41.717079 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.738258 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.738291 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:42.075699 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.730135 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.730181 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:42.730203 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.805335 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.805375 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.076421 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.082935 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:43.082971 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.575664 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.582814 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:19:43.593073 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:19:43.593113 1465727 api_server.go:131] duration metric: took 8.517613988s to wait for apiserver health ...
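The wait above keeps hitting /healthz on the apiserver, treating the 403 responses (anonymous access rejected before RBAC bootstrap) and 500 responses (post-start hooks still failing) as "not ready yet" until a plain 200 ok comes back. A minimal sketch of such a poll, assuming anonymous requests and skipped TLS verification during the bootstrap phase:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls the apiserver /healthz endpoint until it returns
// HTTP 200, printing the intermediate 403/500 bodies much like the log
// above. TLS verification is skipped because the cluster CA may not be
// trusted by the caller yet (an assumption made for this sketch).
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	// Address taken from the log; adjust for your own cluster.
	if err := waitHealthy("https://192.168.50.63:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}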
	I0131 03:19:43.593127 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:43.593144 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:43.594982 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:19:39.815034 1465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944091458s)
	I0131 03:19:39.815074 1465898 crio.go:451] Took 2.944224 seconds to extract the tarball
	I0131 03:19:39.815090 1465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:39.855696 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:39.904386 1465898 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:19:39.904418 1465898 cache_images.go:84] Images are preloaded, skipping loading
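The preload step above asks crictl for the image list, decides the target images are missing, copies the lz4 preload tarball over, extracts it into /var, and re-checks until all images are reported as preloaded. A sketch of the decision half of that flow, assuming crictl's JSON output exposes an images array with repoTags (only the fields the sketch needs are modelled):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models just the fields this sketch reads from
// `crictl images --output json`; the full schema has more fields.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already knows the given tag,
// which is the check used to decide whether the preloaded image
// tarball must be copied over and extracted.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		fmt.Println("crictl not available here:", err)
		return
	}
	if ok {
		fmt.Println("all images are preloaded, skipping loading")
	} else {
		fmt.Println("images not preloaded; would scp and extract the lz4 tarball")
	}
}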
	I0131 03:19:39.904509 1465898 ssh_runner.go:195] Run: crio config
	I0131 03:19:39.972894 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:19:39.972928 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:39.972957 1465898 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:39.972985 1465898 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-873005 NodeName:default-k8s-diff-port-873005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:19:39.973201 1465898 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-873005"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:39.973298 1465898 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-873005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0131 03:19:39.973365 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:19:39.982097 1465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:39.982206 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:39.993781 1465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0131 03:19:40.012618 1465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:40.031973 1465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
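The kubeadm config shown above is generated from a handful of per-profile values (node IP, API port 8444, profile name, Kubernetes version) and then written to /var/tmp/minikube/kubeadm.yaml.new. A trimmed-down sketch of that rendering with text/template; the template below keeps only a few of the fields from the full document and is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds the handful of values this sketch interpolates;
// the real configuration carries many more options.
type kubeadmParams struct {
	NodeIP      string
	APIPort     int
	NodeName    string
	K8sVersion  string
	PodSubnet   string
	ServiceCIDR string
}

// A heavily trimmed version of the document shown in the log above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIPort}}
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		NodeIP:      "192.168.61.123",
		APIPort:     8444,
		NodeName:    "default-k8s-diff-port-873005",
		K8sVersion:  "v1.28.4",
		PodSubnet:   "10.244.0.0/16",
		ServiceCIDR: "10.96.0.0/12",
	}
	// Render to stdout; the provisioner instead copies the result to
	// /var/tmp/minikube/kubeadm.yaml.new over SSH.
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}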
	I0131 03:19:40.049646 1465898 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:40.053498 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
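The /etc/hosts edit above drops any stale line for the name and appends "IP<TAB>name", staging the result in /tmp before sudo cp installs it. A native sketch of the same edit; it only stages the temp file and leaves installing it to the caller:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry reproduces the hosts edit from the log: remove any
// line already mapping the name, append "ip<TAB>name", and write the
// result to a temporary file for the caller to install with sudo cp.
func ensureHostsEntry(hostsPath, ip, name string) (string, error) {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp, err := os.CreateTemp("", "hosts")
	if err != nil {
		return "", err
	}
	defer tmp.Close()
	if _, err := tmp.WriteString(strings.Join(kept, "\n") + "\n"); err != nil {
		return "", err
	}
	return tmp.Name(), nil
}

func main() {
	tmp, err := ensureHostsEntry("/etc/hosts", "192.168.61.123", "control-plane.minikube.internal")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("staged hosts file at", tmp, "- install with: sudo cp", tmp, "/etc/hosts")
}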
	I0131 03:19:40.066873 1465898 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005 for IP: 192.168.61.123
	I0131 03:19:40.066914 1465898 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:40.067198 1465898 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:40.067254 1465898 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:40.067376 1465898 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/client.key
	I0131 03:19:40.067474 1465898 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key.596e38b1
	I0131 03:19:40.067535 1465898 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key
	I0131 03:19:40.067748 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:40.067797 1465898 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:40.067813 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:40.067850 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:40.067885 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:40.067924 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:40.067978 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:40.068687 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:40.094577 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:40.117833 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:40.140782 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:40.163701 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:40.187177 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:40.218570 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:40.246136 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:40.275403 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:40.302040 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:40.327371 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:40.352927 1465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:40.371690 1465898 ssh_runner.go:195] Run: openssl version
	I0131 03:19:40.377700 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:40.387507 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393609 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393701 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.401095 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:40.415647 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:40.426902 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431720 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431803 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.437347 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:40.446986 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:40.457779 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462716 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462790 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.468321 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:40.481055 1465898 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:40.486096 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:40.492538 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:40.498664 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:40.504630 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:40.510588 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:40.516480 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
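Each certificate above is checked with openssl x509 -checkend 86400, i.e. "will it still be valid 24 hours from now". The same question can be answered without shelling out by parsing the PEM and comparing NotAfter, as in this sketch (cert paths copied from the log; run it where those files exist):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// `window` from now, the same check `openssl x509 -checkend 86400`
// performs in the log, done natively instead of via openssl.
func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		ok, err := validFor(p, 24*time.Hour)
		fmt.Printf("%s valid for 24h: %v (err: %v)\n", p, ok, err)
	}
}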
	I0131 03:19:40.524391 1465898 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-873005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:40.524509 1465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:40.524570 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:40.575788 1465898 cri.go:89] found id: ""
	I0131 03:19:40.575887 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:40.585291 1465898 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:40.585320 1465898 kubeadm.go:636] restartCluster start
	I0131 03:19:40.585383 1465898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:40.594593 1465898 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:40.596215 1465898 kubeconfig.go:92] found "default-k8s-diff-port-873005" server: "https://192.168.61.123:8444"
	I0131 03:19:40.600123 1465898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:40.609224 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:40.609289 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:40.620769 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.110331 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.110450 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.121982 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.609492 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.609592 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.621972 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.109411 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.109515 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.124820 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.609296 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.609412 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.621029 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.109511 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.109606 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.124911 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.609397 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.609514 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.626240 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:44.109323 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.109419 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.124549 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.573357 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:45.573785 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:45.573821 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:45.573735 1467469 retry.go:31] will retry after 3.608126135s: waiting for machine to come up
	I0131 03:19:43.596636 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:19:43.613239 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:19:43.655123 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:19:43.665773 1465727 system_pods.go:59] 7 kube-system pods found
	I0131 03:19:43.665819 1465727 system_pods.go:61] "coredns-5644d7b6d9-2g2fj" [fc3c718c-696b-4a57-83e2-d9ee3bed6923] Running
	I0131 03:19:43.665844 1465727 system_pods.go:61] "etcd-old-k8s-version-711547" [4c5a2527-ffa7-4771-8380-56556030ad90] Running
	I0131 03:19:43.665852 1465727 system_pods.go:61] "kube-apiserver-old-k8s-version-711547" [df7cbcbe-1aeb-4986-82e5-70d495b2579d] Running
	I0131 03:19:43.665859 1465727 system_pods.go:61] "kube-controller-manager-old-k8s-version-711547" [21cccd1c-4b8e-4d4f-956d-872aa474e9d8] Running
	I0131 03:19:43.665868 1465727 system_pods.go:61] "kube-proxy-7dtkz" [aac05831-252e-486d-8bc8-772731374a89] Running
	I0131 03:19:43.665875 1465727 system_pods.go:61] "kube-scheduler-old-k8s-version-711547" [da2f43ad-bbc3-44fb-a608-08c2ae08818f] Running
	I0131 03:19:43.665885 1465727 system_pods.go:61] "storage-provisioner" [f16355c3-b573-40f2-ad98-32c077f04e46] Running
	I0131 03:19:43.665894 1465727 system_pods.go:74] duration metric: took 10.742015ms to wait for pod list to return data ...
	I0131 03:19:43.665915 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:19:43.670287 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:19:43.670328 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:19:43.670343 1465727 node_conditions.go:105] duration metric: took 4.422551ms to run NodePressure ...
	I0131 03:19:43.670366 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:43.947579 1465727 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:19:43.952499 1465727 retry.go:31] will retry after 170.414704ms: kubelet not initialised
	I0131 03:19:44.130420 1465727 retry.go:31] will retry after 504.822426ms: kubelet not initialised
	I0131 03:19:44.640095 1465727 retry.go:31] will retry after 519.270243ms: kubelet not initialised
	I0131 03:19:45.164417 1465727 retry.go:31] will retry after 730.256814ms: kubelet not initialised
	I0131 03:19:45.903026 1465727 retry.go:31] will retry after 853.098887ms: kubelet not initialised
	I0131 03:19:46.764300 1465727 retry.go:31] will retry after 2.467014704s: kubelet not initialised
	I0131 03:19:44.609572 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.609682 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.625242 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.109761 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.109894 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.121467 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.610114 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.610210 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.621421 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.109926 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.109996 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.121003 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.609509 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.609649 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.620779 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.110208 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.110316 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.122909 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.609355 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.609474 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.620375 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.109993 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.110131 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.123531 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.610170 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.610266 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.620964 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.109874 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.109997 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.121344 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.183666 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:49.184174 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:49.184209 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:49.184103 1467469 retry.go:31] will retry after 3.277150176s: waiting for machine to come up
	I0131 03:19:52.465465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.465830 1466459 main.go:141] libmachine: (embed-certs-958254) Found IP for machine: 192.168.39.232
	I0131 03:19:52.465849 1466459 main.go:141] libmachine: (embed-certs-958254) Reserving static IP address...
	I0131 03:19:52.465863 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has current primary IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.466264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.466307 1466459 main.go:141] libmachine: (embed-certs-958254) Reserved static IP address: 192.168.39.232
	I0131 03:19:52.466331 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting for SSH to be available...
	I0131 03:19:52.466352 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | skip adding static IP to network mk-embed-certs-958254 - found existing host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"}
	I0131 03:19:52.466368 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Getting to WaitForSSH function...
	I0131 03:19:52.468562 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.468867 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.468900 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.469041 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH client type: external
	I0131 03:19:52.469074 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa (-rw-------)
	I0131 03:19:52.469117 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:52.469137 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | About to run SSH command:
	I0131 03:19:52.469151 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | exit 0
	I0131 03:19:52.554397 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | SSH cmd err, output: <nil>: 
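The WaitForSSH block above shows libmachine probing the freshly booted embed-certs-958254 VM with an external ssh client until "exit 0" succeeds. A roughly equivalent manual invocation, assembled from the options logged at "Using SSH client type: external" (the key path is the one from this run):

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa \
        -p 22 docker@192.168.39.232 'exit 0'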
	I0131 03:19:52.554838 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetConfigRaw
	I0131 03:19:52.555611 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.558511 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.558906 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.558945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.559188 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:19:52.559400 1466459 machine.go:88] provisioning docker machine ...
	I0131 03:19:52.559421 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:52.559645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559816 1466459 buildroot.go:166] provisioning hostname "embed-certs-958254"
	I0131 03:19:52.559831 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559994 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.562543 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.562901 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.562933 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.563085 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.563276 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563436 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563628 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.563800 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.564147 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.564161 1466459 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-958254 && echo "embed-certs-958254" | sudo tee /etc/hostname
	I0131 03:19:52.688777 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-958254
	
	I0131 03:19:52.688817 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.692015 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.692497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692797 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.693013 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693184 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693388 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.693579 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.694043 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.694071 1466459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-958254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-958254/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-958254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:52.821443 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:52.821489 1466459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:52.821543 1466459 buildroot.go:174] setting up certificates
	I0131 03:19:52.821567 1466459 provision.go:83] configureAuth start
	I0131 03:19:52.821583 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.821930 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.825108 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825496 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.825527 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825756 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.828269 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828621 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.828651 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828893 1466459 provision.go:138] copyHostCerts
	I0131 03:19:52.828964 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:52.828987 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:52.829069 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:52.829194 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:52.829209 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:52.829243 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:52.829323 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:52.829335 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:52.829366 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:52.829466 1466459 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.embed-certs-958254 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube embed-certs-958254]
	I0131 03:19:52.931760 1466459 provision.go:172] copyRemoteCerts
	I0131 03:19:52.931825 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:52.931856 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.935111 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935440 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.935465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935721 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.935915 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.936117 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.936273 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.024352 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:53.051185 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:19:53.076996 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
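The configureAuth step above regenerates the machine server certificate with the SANs listed in the "generating server cert" line (192.168.39.232, localhost, 127.0.0.1, minikube, embed-certs-958254) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. One way to confirm which SANs landed on the cert, as a sketch (assumes openssl 1.1.1+ is available on the node; openssl is not part of this log):

    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName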
	I0131 03:19:53.097919 1466459 provision.go:86] duration metric: configureAuth took 276.335726ms
	I0131 03:19:53.097951 1466459 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:53.098189 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:53.098319 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.101687 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102128 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.102178 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102334 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.102610 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.102877 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.103072 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.103309 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.103829 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.103860 1466459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:49.236547 1465727 retry.go:31] will retry after 1.793227218s: kubelet not initialised
	I0131 03:19:51.035248 1465727 retry.go:31] will retry after 2.779615352s: kubelet not initialised
	I0131 03:19:53.664145 1465496 start.go:369] acquired machines lock for "no-preload-625812" in 55.738696582s
	I0131 03:19:53.664205 1465496 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:53.664216 1465496 fix.go:54] fixHost starting: 
	I0131 03:19:53.664629 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:53.664680 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:53.683147 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0131 03:19:53.684034 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:53.684629 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:19:53.684660 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:53.685055 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:53.685266 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:19:53.685468 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:19:53.687260 1465496 fix.go:102] recreateIfNeeded on no-preload-625812: state=Stopped err=<nil>
	I0131 03:19:53.687288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	W0131 03:19:53.687444 1465496 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:53.689464 1465496 out.go:177] * Restarting existing kvm2 VM for "no-preload-625812" ...
	I0131 03:19:49.610240 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.610357 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.621551 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.110145 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.110248 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.121902 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.609752 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.609896 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.620729 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.620760 1465898 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:50.620769 1465898 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:50.620781 1465898 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:50.620842 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:50.655962 1465898 cri.go:89] found id: ""
	I0131 03:19:50.656034 1465898 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:50.670196 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:50.678438 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:50.678512 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686353 1465898 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686377 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:50.787983 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.766656 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.947670 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.020841 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.087869 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:52.087974 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:52.588285 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.088598 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.588683 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.088222 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.416070 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:53.416102 1466459 machine.go:91] provisioned docker machine in 856.686657ms
	I0131 03:19:53.416115 1466459 start.go:300] post-start starting for "embed-certs-958254" (driver="kvm2")
	I0131 03:19:53.416130 1466459 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:53.416152 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.416515 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:53.416550 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.419110 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.419525 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419836 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.420057 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.420223 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.420376 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.503785 1466459 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:53.507858 1466459 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:53.507890 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:53.508021 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:53.508094 1466459 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:53.508184 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:53.515845 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:53.537459 1466459 start.go:303] post-start completed in 121.324433ms
	I0131 03:19:53.537495 1466459 fix.go:56] fixHost completed within 19.950074846s
	I0131 03:19:53.537526 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.540719 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541097 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.541138 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541371 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.541590 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541707 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541924 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.542116 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.542438 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.542452 1466459 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:53.663950 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671193.614107467
	
	I0131 03:19:53.663981 1466459 fix.go:206] guest clock: 1706671193.614107467
	I0131 03:19:53.663991 1466459 fix.go:219] Guest: 2024-01-31 03:19:53.614107467 +0000 UTC Remote: 2024-01-31 03:19:53.537501013 +0000 UTC m=+170.232508862 (delta=76.606454ms)
	I0131 03:19:53.664051 1466459 fix.go:190] guest clock delta is within tolerance: 76.606454ms
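The mangled "date +%!s(MISSING).%!N(MISSING)" two log statements up is the same kind of logging artifact; judging by its output (1706671193.614107467) the command that ran is the usual epoch-with-nanoseconds probe, which minikube compares against the host time to decide whether the guest clock delta is within tolerance:

    date +%s.%N    # e.g. 1706671193.614107467, seconds.nanoseconds since the epoch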
	I0131 03:19:53.664061 1466459 start.go:83] releasing machines lock for "embed-certs-958254", held for 20.076673524s
	I0131 03:19:53.664095 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.664469 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:53.667439 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668024 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.668104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668219 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.668884 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669087 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669227 1466459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:53.669314 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.669346 1466459 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:53.669377 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.673093 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673248 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673420 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673194 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673517 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673557 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673580 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673667 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673734 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.673969 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.673982 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.674173 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.674180 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.674312 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.799336 1466459 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:53.805162 1466459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:53.952587 1466459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:53.958419 1466459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:53.958530 1466459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:53.971832 1466459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:53.971866 1466459 start.go:475] detecting cgroup driver to use...
	I0131 03:19:53.971946 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:53.988375 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:54.000875 1466459 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:54.000948 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:54.017770 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:54.034214 1466459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:54.154352 1466459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:54.314926 1466459 docker.go:233] disabling docker service ...
	I0131 03:19:54.315012 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:54.330557 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:54.344595 1466459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:54.468196 1466459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:54.630438 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:54.645472 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:54.665340 1466459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:54.665427 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.677758 1466459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:54.677843 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.690405 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.702616 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.712654 1466459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:54.723746 1466459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:54.735284 1466459 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:54.735358 1466459 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:54.751082 1466459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:54.762460 1466459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:54.916842 1466459 ssh_runner.go:195] Run: sudo systemctl restart crio
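The sequence ending in the crio restart above reshapes the cri-o runtime before Kubernetes is started: point crictl at the cri-o socket, pin the pause image, switch the cgroup manager to cgroupfs, rewrite conmon_cgroup, and make sure bridge netfilter and IP forwarding are on. Collected into one place (same commands as in the log, lightly re-quoted as a sketch):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio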
	I0131 03:19:55.105770 1466459 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:55.105862 1466459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:55.111870 1466459 start.go:543] Will wait 60s for crictl version
	I0131 03:19:55.112014 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:19:55.116743 1466459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:55.165427 1466459 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:55.165526 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.223389 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.272307 1466459 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:19:53.690828 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Start
	I0131 03:19:53.691030 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring networks are active...
	I0131 03:19:53.691801 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network default is active
	I0131 03:19:53.692297 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network mk-no-preload-625812 is active
	I0131 03:19:53.693485 1465496 main.go:141] libmachine: (no-preload-625812) Getting domain xml...
	I0131 03:19:53.694618 1465496 main.go:141] libmachine: (no-preload-625812) Creating domain...
	I0131 03:19:55.042532 1465496 main.go:141] libmachine: (no-preload-625812) Waiting to get IP...
	I0131 03:19:55.043607 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.044041 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.044180 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.044045 1467687 retry.go:31] will retry after 230.922351ms: waiting for machine to come up
	I0131 03:19:55.276816 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.277402 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.277435 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.277367 1467687 retry.go:31] will retry after 370.068692ms: waiting for machine to come up
	I0131 03:19:55.274017 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:55.277592 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278017 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:55.278056 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278356 1466459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:55.283769 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
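The grep-then-rewrite pair above is how minikube pins host.minikube.internal (the libvirt gateway, 192.168.39.1 here) inside the guest; after it runs, /etc/hosts simply carries:

    192.168.39.1	host.minikube.internal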
	I0131 03:19:55.298107 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:55.298188 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:55.338433 1466459 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:55.338558 1466459 ssh_runner.go:195] Run: which lz4
	I0131 03:19:55.342771 1466459 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:55.347160 1466459 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:55.347206 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:56.991725 1466459 crio.go:444] Took 1.648994 seconds to copy over tarball
	I0131 03:19:56.991821 1466459 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:53.823139 1465727 retry.go:31] will retry after 3.780431021s: kubelet not initialised
	I0131 03:19:57.615679 1465727 retry.go:31] will retry after 12.134340719s: kubelet not initialised
	I0131 03:19:54.588794 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.623052 1465898 api_server.go:72] duration metric: took 2.535180605s to wait for apiserver process to appear ...
	I0131 03:19:54.623092 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:54.623141 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:55.649133 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.649797 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.649838 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.649768 1467687 retry.go:31] will retry after 421.622241ms: waiting for machine to come up
	I0131 03:19:56.073712 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.074467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.074513 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.074269 1467687 retry.go:31] will retry after 587.05453ms: waiting for machine to come up
	I0131 03:19:56.663210 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.663749 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.663790 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.663678 1467687 retry.go:31] will retry after 620.56275ms: waiting for machine to come up
	I0131 03:19:57.286207 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.286688 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.286737 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.286647 1467687 retry.go:31] will retry after 674.764903ms: waiting for machine to come up
	I0131 03:19:57.963069 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.963573 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.963599 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.963520 1467687 retry.go:31] will retry after 1.10400582s: waiting for machine to come up
	I0131 03:19:59.068964 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:59.069440 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:59.069467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:59.069383 1467687 retry.go:31] will retry after 1.48867494s: waiting for machine to come up
	I0131 03:20:00.084963 1466459 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093104085s)
	I0131 03:20:00.085000 1466459 crio.go:451] Took 3.093238 seconds to extract the tarball
	I0131 03:20:00.085014 1466459 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:20:00.153533 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:00.203133 1466459 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:20:00.203215 1466459 cache_images.go:84] Images are preloaded, skipping loading
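The preload path in this stretch is worth noting: the first crictl query comes back without the expected images, so minikube scps the ~458 MB preloaded-images tarball to /preloaded.tar.lz4 on the VM, unpacks it straight into /var, removes it, and the second crictl query then reports everything present. The on-node steps, roughly:

    sudo crictl images --output json     # missing registry.k8s.io/kube-apiserver:v1.28.4, so preload is needed
    # (preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 is copied over scp)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json     # now lists all preloaded images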
	I0131 03:20:00.203308 1466459 ssh_runner.go:195] Run: crio config
	I0131 03:20:00.266864 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:00.266898 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:00.266927 1466459 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:00.266955 1466459 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-958254 NodeName:embed-certs-958254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:00.267148 1466459 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-958254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:00.267253 1466459 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-958254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
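The "0%!"(MISSING) values in the evictionHard block of the generated config are, once more, log formatting artifacts; the thresholds being written are plain "0%". Once this config and the kubelet unit are on disk, minikube re-runs the individual kubeadm init phases against /var/tmp/minikube/kubeadm.yaml rather than a full kubeadm init, the same sequence visible earlier in this log:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml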
	I0131 03:20:00.267331 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:20:00.279543 1466459 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:00.279637 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:00.292463 1466459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0131 03:20:00.313102 1466459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:20:00.329962 1466459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0131 03:20:00.351487 1466459 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:00.355881 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:00.368624 1466459 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254 for IP: 192.168.39.232
	I0131 03:20:00.368668 1466459 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:00.368836 1466459 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:00.368890 1466459 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:00.368997 1466459 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/client.key
	I0131 03:20:00.369071 1466459 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key.ca7bc7e0
	I0131 03:20:00.369108 1466459 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key
	I0131 03:20:00.369230 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:00.369257 1466459 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:00.369268 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:00.369294 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:00.369317 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:00.369341 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:00.369379 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:00.370093 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:00.392771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 03:20:00.416504 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:00.441357 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 03:20:00.469603 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:00.493533 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:00.521871 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:00.547738 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:00.572771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:00.596263 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:00.618766 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:00.642074 1466459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:00.657634 1466459 ssh_runner.go:195] Run: openssl version
	I0131 03:20:00.662869 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:00.673704 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678201 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678299 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.683872 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:00.694619 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:00.705736 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710374 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710451 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.715928 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:00.727620 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:00.738237 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742428 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742525 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.747812 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:00.757953 1466459 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:00.762418 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:00.768325 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:00.773824 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:00.779967 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:00.785943 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:00.791907 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
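	Each of the six openssl runs above uses "-checkend 86400" to confirm that the corresponding control-plane certificate will still be valid 24 hours from now; only when all of them pass is certificate regeneration skipped. A minimal Go sketch of the same check follows (the file path and the 24h window are taken from the log; this is an illustration, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certValidFor reports whether the PEM certificate at path is still valid
	// at now+d, which is roughly what `openssl x509 -checkend <seconds>` tests.
	func certValidFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for the next 24h:", ok)
	}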
	I0131 03:20:00.797790 1466459 kubeadm.go:404] StartCluster: {Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:00.797882 1466459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:00.797989 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:00.843199 1466459 cri.go:89] found id: ""
	I0131 03:20:00.843289 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:00.853963 1466459 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:00.853994 1466459 kubeadm.go:636] restartCluster start
	I0131 03:20:00.854060 1466459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:00.864776 1466459 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:00.866019 1466459 kubeconfig.go:92] found "embed-certs-958254" server: "https://192.168.39.232:8443"
	I0131 03:20:00.868584 1466459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:00.878689 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:00.878765 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:00.891577 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.378755 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.378849 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.392040 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.879661 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.879770 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.894998 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.379551 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.379671 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.393008 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.879560 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.879680 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.896699 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:59.557240 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.557285 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.557308 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.612724 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.612775 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.624061 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.721181 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:19:59.721236 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.123708 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.134187 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.134229 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.624066 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.630341 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.630374 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.123728 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.131385 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.131479 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.623667 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.629384 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.629431 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.123701 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.129220 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.129272 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.623693 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.629228 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.629271 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.123778 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.132555 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:03.132617 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.623244 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.630694 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:20:03.649732 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:03.649778 1465898 api_server.go:131] duration metric: took 9.02667615s to wait for apiserver health ...
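	The run above polled https://192.168.61.123:8444/healthz until the 403 and 500 responses gave way to a 200, which took about 9 seconds. A simplified Go sketch of that wait loop follows (the insecure TLS transport and the 500ms poll interval are assumptions made for illustration; minikube itself verifies against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it answers
	// 200 OK or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping certificate verification is an assumption for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.123:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}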
	I0131 03:20:03.649792 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:20:03.649802 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:03.651944 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:03.653645 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:03.683325 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:03.719778 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:03.745975 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:03.746029 1465898 system_pods.go:61] "coredns-5dd5756b68-xlq7n" [0b9d620d-d79f-474e-aeb7-1357daaaa849] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:03.746044 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [2f2f474f-bee9-4df2-a5f6-2525bc15c99a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:03.746056 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [ba87e90b-b01b-4aa7-a4da-68d8e5c39020] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:03.746088 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [a96ebed4-d6f6-47b7-a8f6-b80acc9cde60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:03.746111 1465898 system_pods.go:61] "kube-proxy-trv94" [c085dfdb-0b75-40c1-b331-ef687888090e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:03.746121 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [b7adce77-8007-4316-9a2a-bdcec260840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:03.746141 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-fct8b" [b1d9d7e3-98c4-4b7a-acd1-d88fe109ef33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:03.746155 1465898 system_pods.go:61] "storage-provisioner" [be762288-ff88-44e7-938d-9ecc8a977526] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:03.746169 1465898 system_pods.go:74] duration metric: took 26.36215ms to wait for pod list to return data ...
	I0131 03:20:03.746183 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:03.755320 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:03.755365 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:03.755384 1465898 node_conditions.go:105] duration metric: took 9.194114ms to run NodePressure ...
	I0131 03:20:03.755413 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:04.124222 1465898 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130888 1465898 kubeadm.go:787] kubelet initialised
	I0131 03:20:04.130921 1465898 kubeadm.go:788] duration metric: took 6.663771ms waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130932 1465898 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:04.141883 1465898 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
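	The pod_ready helper above waits up to 4m0s for each system-critical pod's Ready condition to become True. A hedged client-go sketch of the same check follows (the kubeconfig path is an assumption; the pod name and namespace are taken from the log; this illustrates the condition being polled, not minikube's own code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // path is an assumption
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-xlq7n", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}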
	I0131 03:20:00.559917 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:00.715628 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:00.715677 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:00.560506 1467687 retry.go:31] will retry after 1.67725835s: waiting for machine to come up
	I0131 03:20:02.240289 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:02.240826 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:02.240863 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:02.240781 1467687 retry.go:31] will retry after 2.023057937s: waiting for machine to come up
	I0131 03:20:04.266202 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:04.266733 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:04.266825 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:04.266715 1467687 retry.go:31] will retry after 2.664323304s: waiting for machine to come up
	I0131 03:20:03.379260 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.379366 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.395063 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:03.879206 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.879327 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.896172 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.378721 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.378829 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.395328 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.878823 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.878944 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.891061 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.379692 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.379795 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.395247 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.879667 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.879811 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.894445 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.378974 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.379107 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.391878 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.879343 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.879446 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.892910 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.379549 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.379647 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.391991 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.879610 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.879757 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.895280 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.154196 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:08.664906 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:06.932836 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:06.933529 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:06.933574 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:06.933459 1467687 retry.go:31] will retry after 3.065677387s: waiting for machine to come up
	I0131 03:20:10.001330 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:10.002186 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:10.002216 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:10.002101 1467687 retry.go:31] will retry after 3.036905728s: waiting for machine to come up
	I0131 03:20:08.379200 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.379310 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.392983 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:08.878955 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.879070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.890999 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.379530 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.379633 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.391351 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.878733 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.878814 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.891556 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.379098 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.379206 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.391233 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.879672 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.879786 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.892324 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.892364 1466459 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:10.892377 1466459 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:10.892393 1466459 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:10.892471 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:10.932354 1466459 cri.go:89] found id: ""
	I0131 03:20:10.932425 1466459 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:10.948273 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:10.957212 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:10.957285 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966329 1466459 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966369 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.093326 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.750399 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.960956 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.060752 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.148963 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:12.149070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:12.649736 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:13.150030 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:09.755152 1465727 retry.go:31] will retry after 13.770889272s: kubelet not initialised
	I0131 03:20:09.648674 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:09.648703 1465898 pod_ready.go:81] duration metric: took 5.506781604s waiting for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:09.648716 1465898 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656233 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:11.656258 1465898 pod_ready.go:81] duration metric: took 2.007535905s waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656267 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663570 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.663600 1465898 pod_ready.go:81] duration metric: took 1.007324961s waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668808 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.668832 1465898 pod_ready.go:81] duration metric: took 5.21407ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668843 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673583 1465898 pod_ready.go:92] pod "kube-proxy-trv94" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.673603 1465898 pod_ready.go:81] duration metric: took 4.754586ms waiting for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679052 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.679074 1465898 pod_ready.go:81] duration metric: took 5.453847ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679082 1465898 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:13.040911 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.041419 1465496 main.go:141] libmachine: (no-preload-625812) Found IP for machine: 192.168.72.23
	I0131 03:20:13.041451 1465496 main.go:141] libmachine: (no-preload-625812) Reserving static IP address...
	I0131 03:20:13.041471 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has current primary IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.042029 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.042083 1465496 main.go:141] libmachine: (no-preload-625812) Reserved static IP address: 192.168.72.23
	I0131 03:20:13.042105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | skip adding static IP to network mk-no-preload-625812 - found existing host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"}
	I0131 03:20:13.042124 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Getting to WaitForSSH function...
	I0131 03:20:13.042143 1465496 main.go:141] libmachine: (no-preload-625812) Waiting for SSH to be available...
	I0131 03:20:13.044263 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044670 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.044707 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044866 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH client type: external
	I0131 03:20:13.044890 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa (-rw-------)
	I0131 03:20:13.044958 1465496 main.go:141] libmachine: (no-preload-625812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:20:13.044979 1465496 main.go:141] libmachine: (no-preload-625812) DBG | About to run SSH command:
	I0131 03:20:13.044993 1465496 main.go:141] libmachine: (no-preload-625812) DBG | exit 0
	I0131 03:20:13.142763 1465496 main.go:141] libmachine: (no-preload-625812) DBG | SSH cmd err, output: <nil>: 
	I0131 03:20:13.143065 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetConfigRaw
	I0131 03:20:13.143763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.146827 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147322 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.147356 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147639 1465496 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/config.json ...
	I0131 03:20:13.147843 1465496 machine.go:88] provisioning docker machine ...
	I0131 03:20:13.147866 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:13.148104 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148307 1465496 buildroot.go:166] provisioning hostname "no-preload-625812"
	I0131 03:20:13.148332 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148510 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.151259 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151623 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.151658 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151808 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.152034 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152222 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152415 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.152601 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.152979 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.152996 1465496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-625812 && echo "no-preload-625812" | sudo tee /etc/hostname
	I0131 03:20:13.302957 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-625812
	
	I0131 03:20:13.302989 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.306162 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306612 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.306656 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306932 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.307236 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307458 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307644 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.307891 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.308385 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.308415 1465496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-625812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-625812/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-625812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:20:13.459393 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:20:13.459432 1465496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:20:13.459458 1465496 buildroot.go:174] setting up certificates
	I0131 03:20:13.459476 1465496 provision.go:83] configureAuth start
	I0131 03:20:13.459490 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.459818 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.462867 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463301 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.463333 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463516 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.466156 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466597 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.466629 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466788 1465496 provision.go:138] copyHostCerts
	I0131 03:20:13.466856 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:20:13.466870 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:20:13.466945 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:20:13.467051 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:20:13.467061 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:20:13.467099 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:20:13.467182 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:20:13.467195 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:20:13.467226 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:20:13.467295 1465496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.no-preload-625812 san=[192.168.72.23 192.168.72.23 localhost 127.0.0.1 minikube no-preload-625812]
	I0131 03:20:13.629331 1465496 provision.go:172] copyRemoteCerts
	I0131 03:20:13.629392 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:20:13.629420 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.632451 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.632871 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.632903 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.633155 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.633334 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.633502 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.633643 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:13.729991 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:20:13.755853 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:20:13.781125 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:20:13.803778 1465496 provision.go:86] duration metric: configureAuth took 344.286867ms
	I0131 03:20:13.803820 1465496 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:20:13.804030 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:20:13.804138 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.807234 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807675 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.807736 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807899 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.808108 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808307 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808461 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.808663 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.809033 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.809055 1465496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:20:14.179008 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:20:14.179039 1465496 machine.go:91] provisioned docker machine in 1.031179568s
	I0131 03:20:14.179055 1465496 start.go:300] post-start starting for "no-preload-625812" (driver="kvm2")
	I0131 03:20:14.179072 1465496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:20:14.179134 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.179500 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:20:14.179542 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.183050 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183483 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.183515 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183726 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.183919 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.184103 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.184299 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.282828 1465496 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:20:14.288098 1465496 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:20:14.288135 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:20:14.288242 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:20:14.288351 1465496 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:20:14.288482 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:20:14.297359 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:14.323339 1465496 start.go:303] post-start completed in 144.265535ms
	I0131 03:20:14.323379 1465496 fix.go:56] fixHost completed within 20.659162262s
	I0131 03:20:14.323408 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.326649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.327063 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327386 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.327693 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.327882 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.328068 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.328260 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:14.328638 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:14.328668 1465496 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:20:14.464275 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671214.411008277
	
	I0131 03:20:14.464299 1465496 fix.go:206] guest clock: 1706671214.411008277
	I0131 03:20:14.464307 1465496 fix.go:219] Guest: 2024-01-31 03:20:14.411008277 +0000 UTC Remote: 2024-01-31 03:20:14.32338512 +0000 UTC m=+358.954052365 (delta=87.623157ms)
	I0131 03:20:14.464327 1465496 fix.go:190] guest clock delta is within tolerance: 87.623157ms
	I0131 03:20:14.464332 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 20.800154018s
	I0131 03:20:14.464349 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.464664 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:14.467627 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.467912 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.467952 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.468086 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468622 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468827 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468918 1465496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:20:14.468974 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.469103 1465496 ssh_runner.go:195] Run: cat /version.json
	I0131 03:20:14.469143 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.471884 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472243 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472408 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472472 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472507 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472426 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472696 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472810 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472825 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473046 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473048 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473275 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.473288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473547 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.563583 1465496 ssh_runner.go:195] Run: systemctl --version
	I0131 03:20:14.602977 1465496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:20:14.752069 1465496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:20:14.759056 1465496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:20:14.759149 1465496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:20:14.778064 1465496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:20:14.778102 1465496 start.go:475] detecting cgroup driver to use...
	I0131 03:20:14.778197 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:20:14.791672 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:20:14.803938 1465496 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:20:14.804018 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:20:14.816689 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:20:14.829415 1465496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:20:14.956428 1465496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:20:15.082172 1465496 docker.go:233] disabling docker service ...
	I0131 03:20:15.082260 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:20:15.094675 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:20:15.106262 1465496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:20:15.229460 1465496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:20:15.341585 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:20:15.354587 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:20:15.374141 1465496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:20:15.374228 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.386153 1465496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:20:15.386224 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.398130 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.407759 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.417278 1465496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:20:15.427128 1465496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:20:15.437249 1465496 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:20:15.437318 1465496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:20:15.451522 1465496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:20:15.460741 1465496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:20:15.564813 1465496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:20:15.729334 1465496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:20:15.729436 1465496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:20:15.734544 1465496 start.go:543] Will wait 60s for crictl version
	I0131 03:20:15.734634 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:15.738536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:20:15.789942 1465496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:20:15.790066 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.844864 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.895286 1465496 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0131 03:20:13.649824 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.150192 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.649250 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.677858 1466459 api_server.go:72] duration metric: took 2.528895825s to wait for apiserver process to appear ...
	I0131 03:20:14.677890 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:14.677920 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:14.688429 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:17.190684 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:15.896701 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:15.899655 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900079 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:15.900105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900392 1465496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0131 03:20:15.904607 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:15.916202 1465496 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 03:20:15.916255 1465496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:15.964126 1465496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0131 03:20:15.964157 1465496 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:20:15.964213 1465496 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.964249 1465496 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.964291 1465496 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.964278 1465496 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.964411 1465496 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0131 03:20:15.964472 1465496 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.964696 1465496 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.964771 1465496 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:15.965842 1465496 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.966659 1465496 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0131 03:20:15.966705 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.966737 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.967221 1465496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.967386 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.157890 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.160428 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0131 03:20:16.170727 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.185791 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.209517 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.212835 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.215809 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.221405 1465496 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0131 03:20:16.221457 1465496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.221504 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369265 1465496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0131 03:20:16.369302 1465496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0131 03:20:16.369324 1465496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.369340 1465496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.369344 1465496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0131 03:20:16.369367 1465496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.369382 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369392 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369404 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369474 1465496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0131 03:20:16.369494 1465496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.369506 1465496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0131 03:20:16.369521 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369529 1465496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.369562 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369617 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.384313 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.384333 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.470950 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0131 03:20:16.471044 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.471091 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.496271 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0131 03:20:16.496296 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496398 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496485 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:16.496488 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496338 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.496494 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496730 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.531464 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531550 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0131 03:20:16.531570 1465496 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531594 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531640 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531595 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0131 03:20:16.531669 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531638 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531738 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0131 03:20:16.536091 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0131 03:20:16.805880 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339660 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.807978952s)
	I0131 03:20:20.339703 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0131 03:20:20.339719 1465496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.533795146s)
	I0131 03:20:20.339744 1465496 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339785 1465496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0131 03:20:20.339823 1465496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339829 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339863 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:19.144422 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.144461 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.144481 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.199050 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.199092 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.199110 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.248370 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.248405 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:19.678887 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.699942 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.699975 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.178212 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.196360 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:20.196408 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.679003 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.685599 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:20:20.693909 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:20.693939 1466459 api_server.go:131] duration metric: took 6.016042033s to wait for apiserver health ...
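
The sequence above is the standard apiserver readiness wait: anonymous GETs against /healthz first come back 403 while RBAC bootstrap is still pending, then 500 with individual poststarthook failures, and finally 200 once every hook reports ok. Below is a minimal standalone sketch of that polling loop; it is illustrative only (not minikube's api_server.go), and the URL, timeout, and retry interval are taken or inferred from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline expires, mirroring the "Checking ..." / "returned ..." pairs above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe runs before client certificates are wired up, so the anonymous
		// request has to tolerate the apiserver's self-signed serving certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps
	}
	return fmt.Errorf("apiserver %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.232:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
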
	I0131 03:20:20.693972 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:20.693978 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:20.695935 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:20.697296 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:20.708301 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
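
Bridge CNI setup here amounts to writing a conflist into /etc/cni/net.d. The log records only the destination path and size (457 bytes), not the file contents, so the sketch below writes a hypothetical minimal bridge configuration; the JSON fields are assumptions for illustration, not the file minikube actually copies.

package main

import (
	"fmt"
	"os"
)

// A hypothetical minimal bridge CNI conflist. The actual 457-byte file that is
// copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log, so the
// field values here (including the 10.244.0.0/16 pod subnet) are illustrative.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// 0644 matches what a root-owned CNI config would normally use.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
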
	I0131 03:20:20.730496 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:20.741756 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:20.741799 1466459 system_pods.go:61] "coredns-5dd5756b68-ntmxp" [bb90dd61-c60a-4beb-b77c-66c4b5ce56a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:20.741810 1466459 system_pods.go:61] "etcd-embed-certs-958254" [69a5883a-307d-47d1-86ef-6f76bf77bdff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:20.741830 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [1cad3813-0df9-4729-862f-d1ab237d297c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:20.741841 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [34bfed89-5c8c-4294-843b-d32261c8fb5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:20.741851 1466459 system_pods.go:61] "kube-proxy-q6dmr" [092e0786-80f7-480c-8ede-95e11c1f17a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:20.741862 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [28c8d75e-9517-4ccc-85ef-5b535973c829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:20.741876 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-d8x5f" [fc69fea8-ab7b-4f3d-980f-7ad995027e77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:20.741889 1466459 system_pods.go:61] "storage-provisioner" [5026a00d-8df8-408a-a164-cf22697260e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:20.741898 1466459 system_pods.go:74] duration metric: took 11.375298ms to wait for pod list to return data ...
	I0131 03:20:20.741912 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:20.748073 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:20.748110 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:20.748125 1466459 node_conditions.go:105] duration metric: took 6.206594ms to run NodePressure ...
	I0131 03:20:20.748147 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:21.022867 1466459 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028572 1466459 kubeadm.go:787] kubelet initialised
	I0131 03:20:21.028600 1466459 kubeadm.go:788] duration metric: took 5.696903ms waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028612 1466459 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:21.034373 1466459 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.040977 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041008 1466459 pod_ready.go:81] duration metric: took 6.605955ms waiting for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.041021 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041029 1466459 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.047304 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047360 1466459 pod_ready.go:81] duration metric: took 6.317423ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.047379 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047397 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.054356 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054380 1466459 pod_ready.go:81] duration metric: took 6.969808ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.054393 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054405 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.066327 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
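
The pod_ready checks above wait on the pod's Ready condition but deliberately skip a pod while its node still reports Ready=False, which is why coredns, etcd and kube-apiserver were skipped with "node ... is currently not Ready". A rough client-go equivalent of that check is sketched below; the kubeconfig path and pod name are placeholders taken from this run, and the code is illustrative rather than minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podAndNodeReady reports whether the pod has the Ready condition and whether
// its node does, mirroring the logic above that skips waiting on a pod while
// its node still reports "Ready":"False".
func podAndNodeReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (podReady, nodeReady bool, err error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, false, err
	}
	// Assumes the pod has already been scheduled; an unscheduled pod has an empty NodeName.
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			podReady = true
		}
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			nodeReady = true
		}
	}
	return podReady, nodeReady, nil
}

func main() {
	// Pod name comes from the log; the kubeconfig location is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	podReady, nodeReady, err := podAndNodeReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-ntmxp")
	fmt.Println(podReady, nodeReady, err)
}
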
	I0131 03:20:19.688890 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.187659 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.403415 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.063558989s)
	I0131 03:20:22.403448 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0131 03:20:22.403467 1465496 ssh_runner.go:235] Completed: which crictl: (2.063583602s)
	I0131 03:20:22.403536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:22.403473 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.403667 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.453126 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0131 03:20:22.453255 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:25.325221 1465496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.871938157s)
	I0131 03:20:25.325266 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0131 03:20:25.325371 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.92167713s)
	I0131 03:20:25.325397 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0131 03:20:25.325430 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.325498 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.562106 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.562702 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.562730 1466459 pod_ready.go:81] duration metric: took 5.508313651s waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.562740 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570741 1466459 pod_ready.go:92] pod "kube-proxy-q6dmr" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.570776 1466459 pod_ready.go:81] duration metric: took 8.02796ms waiting for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570788 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.532998 1465727 kubeadm.go:787] kubelet initialised
	I0131 03:20:23.533031 1465727 kubeadm.go:788] duration metric: took 39.585413252s waiting for restarted kubelet to initialise ...
	I0131 03:20:23.533041 1465727 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:23.538956 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545637 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.545665 1465727 pod_ready.go:81] duration metric: took 6.67341ms waiting for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545679 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552018 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.552047 1465727 pod_ready.go:81] duration metric: took 6.359089ms waiting for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552061 1465727 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557416 1465727 pod_ready.go:92] pod "etcd-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.557446 1465727 pod_ready.go:81] duration metric: took 5.375834ms waiting for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557458 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563429 1465727 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.563458 1465727 pod_ready.go:81] duration metric: took 5.99092ms waiting for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563470 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931088 1465727 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.931123 1465727 pod_ready.go:81] duration metric: took 367.644608ms waiting for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931135 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330635 1465727 pod_ready.go:92] pod "kube-proxy-7dtkz" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.330663 1465727 pod_ready.go:81] duration metric: took 399.520658ms waiting for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330673 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731521 1465727 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.731554 1465727 pod_ready.go:81] duration metric: took 400.873461ms waiting for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731568 1465727 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.738444 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:24.686688 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.688623 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:29.186579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.180697 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.855170809s)
	I0131 03:20:28.180729 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0131 03:20:28.180767 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:28.180841 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:29.652395 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.471522862s)
	I0131 03:20:29.652425 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0131 03:20:29.652463 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:29.652540 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:28.578108 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.077401 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.080970 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.739586 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:30.739736 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.238815 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.187176 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.188862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.502715 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.85014178s)
	I0131 03:20:31.502759 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0131 03:20:31.502787 1465496 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:31.502844 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:32.554143 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.051250967s)
	I0131 03:20:32.554188 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0131 03:20:32.554229 1465496 cache_images.go:123] Successfully loaded all cached images
	I0131 03:20:32.554282 1465496 cache_images.go:92] LoadImages completed in 16.590108265s
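
Each "Loading image:" / "Transferred and loaded ... from cache" pair above corresponds to handing a cached tarball to podman so that CRI-O can serve the image without pulling it. The sketch below replays the same podman load calls locally via os/exec; the paths come from the log, and the sudo/ssh plumbing that minikube's ssh_runner performs is omitted.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImages replays the "Loading image:" steps above: each cached image
// tarball is handed to podman, which imports it into the node's image store.
func loadCachedImages(tarballs []string) error {
	for _, t := range tarballs {
		cmd := exec.Command("sudo", "podman", "load", "-i", t)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load -i %s: %v\n%s", t, err, out)
		}
		fmt.Printf("loaded %s\n", t)
	}
	return nil
}

func main() {
	// Paths taken from the log; the order matches the sequence above.
	images := []string{
		"/var/lib/minikube/images/coredns_v1.11.1",
		"/var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2",
		"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2",
		"/var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2",
		"/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	if err := loadCachedImages(images); err != nil {
		fmt.Println(err)
	}
}
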
	I0131 03:20:32.554386 1465496 ssh_runner.go:195] Run: crio config
	I0131 03:20:32.619584 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:32.619612 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:32.619637 1465496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:32.619665 1465496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-625812 NodeName:no-preload-625812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:32.619840 1465496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-625812"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:32.619939 1465496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-625812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
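
The kubeadm config printed above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a quick sanity check, one can split the documents and list their kinds; the stdlib-only sketch below does just that and is purely illustrative.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Split the generated multi-document kubeadm YAML and print each document's
// kind, a cheap check that all four expected configuration kinds are present
// before the file is handed to kubeadm.
func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(string(data), "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i, m[1])
		}
	}
}
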
	I0131 03:20:32.620017 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0131 03:20:32.628855 1465496 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:32.628963 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:32.636481 1465496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0131 03:20:32.654320 1465496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0131 03:20:32.670366 1465496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0131 03:20:32.688615 1465496 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:32.692444 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:32.705599 1465496 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812 for IP: 192.168.72.23
	I0131 03:20:32.705644 1465496 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:32.705822 1465496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:32.705894 1465496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:32.705997 1465496 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/client.key
	I0131 03:20:32.706058 1465496 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key.a30a8404
	I0131 03:20:32.706092 1465496 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key
	I0131 03:20:32.706194 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:32.706221 1465496 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:32.706231 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:32.706258 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:32.706284 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:32.706310 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:32.706349 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:32.707138 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:32.729972 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:20:32.753498 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:32.775599 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:20:32.799455 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:32.822732 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:32.845839 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:32.868933 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:32.891565 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:32.914752 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:32.937305 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:32.960253 1465496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:32.976285 1465496 ssh_runner.go:195] Run: openssl version
	I0131 03:20:32.981630 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:32.990533 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994914 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994986 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:33.000249 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:33.009516 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:33.018643 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023046 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023106 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.028238 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:33.036925 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:33.045708 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050442 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050536 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.056067 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:33.065200 1465496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:33.069489 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:33.075140 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:33.080981 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:33.087018 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:33.092665 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:33.099605 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
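
The six openssl runs above use -checkend 86400, so each command exits 0 only if the certificate is still valid 24 hours from now; a non-zero exit would presumably push minikube toward regenerating that certificate. The sketch below wraps the same check; the certificate paths are copied from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// certStillValid mirrors "openssl x509 -noout -in <cert> -checkend 86400":
// openssl exits 0 if the certificate will still be valid 24h from now and
// non-zero if it will have expired by then.
func certStillValid(path string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
	return err == nil
}

func main() {
	// Certificate paths taken from the log lines above.
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		fmt.Printf("%s valid for 24h: %v\n", c, certStillValid(c))
	}
}
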
	I0131 03:20:33.106207 1465496 kubeadm.go:404] StartCluster: {Name:no-preload-625812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:33.106310 1465496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:33.106376 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:33.150992 1465496 cri.go:89] found id: ""
	I0131 03:20:33.151088 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:33.161105 1465496 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:33.161131 1465496 kubeadm.go:636] restartCluster start
	I0131 03:20:33.161219 1465496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:33.170638 1465496 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.172109 1465496 kubeconfig.go:92] found "no-preload-625812" server: "https://192.168.72.23:8443"
	I0131 03:20:33.175582 1465496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:33.185433 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.185523 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.196952 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.685512 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.685612 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.696682 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.186433 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.197957 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.685533 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.685640 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.696731 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:35.186267 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.186369 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.197982 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.578014 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:33.578038 1466459 pod_ready.go:81] duration metric: took 7.007241801s waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:33.578047 1466459 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:35.585039 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.585299 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.737680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.740698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686379 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:38.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686193 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.686284 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.697343 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.185858 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.185960 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.197161 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.685546 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.685646 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.696796 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.186186 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.186280 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.197357 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.685916 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.686012 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.700288 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.185723 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.185820 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.197397 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.685651 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.685757 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.697204 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.185744 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.185844 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.198598 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.686185 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.686267 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.697736 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.186432 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.198099 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.085028 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.585359 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.238117 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.239129 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.687687 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:43.186737 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.686132 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.686236 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.699172 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.185642 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.185744 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.198284 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.685827 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.685935 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.698501 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.185953 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.186088 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.196802 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.686371 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.686445 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.698536 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.186445 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:43.186560 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:43.198640 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.198679 1465496 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:43.198690 1465496 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:43.198704 1465496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:43.198765 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:43.235648 1465496 cri.go:89] found id: ""
	I0131 03:20:43.235740 1465496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:43.252848 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:43.263501 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:43.263590 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274044 1465496 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274075 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:43.402961 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.454642 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051640672s)
	I0131 03:20:44.454673 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.660185 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.744795 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.816577 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:44.816690 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:45.316895 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:44.591170 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.085954 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:44.739730 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.240982 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.686082 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.687451 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.816800 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.317657 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.816892 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.317696 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.342389 1465496 api_server.go:72] duration metric: took 2.525810484s to wait for apiserver process to appear ...
	I0131 03:20:47.342423 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:47.342448 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.385155 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.385192 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.385206 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.431253 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.431293 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.842624 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.847644 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:51.847685 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.343335 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.348723 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:52.348780 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.842935 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.848263 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:20:52.863072 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:20:52.863104 1465496 api_server.go:131] duration metric: took 5.520672047s to wait for apiserver health ...
	I0131 03:20:52.863113 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:52.863120 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:52.865141 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:49.585837 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.087030 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:49.738408 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:51.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:50.186754 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.197217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.866822 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:52.881451 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:52.918954 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:52.930533 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:52.930566 1465496 system_pods.go:61] "coredns-76f75df574-4qhpt" [9a5c2a49-f787-456a-9d15-cea2e111c6fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:52.930575 1465496 system_pods.go:61] "etcd-no-preload-625812" [2dbdb2c3-dd04-40de-80b4-caf18f1df2e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:52.930587 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [fd209808-5ebc-464e-b14b-88c6c830d7bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:52.930593 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [1f2cb9ec-cec9-4c45-8b78-0c9a9c0c9821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:52.930600 1465496 system_pods.go:61] "kube-proxy-8fdx9" [d1311d92-482b-4aa2-9dd3-053597717aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:52.930607 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [f7b0ba21-6c1d-4c67-aa69-6086b28ddf78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:52.930614 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-sjndx" [6bcdb3bb-4e28-4127-a273-091b44059d10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:52.930620 1465496 system_pods.go:61] "storage-provisioner" [66a4003b-e35e-4216-8d27-e8897a6ddc71] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:52.930627 1465496 system_pods.go:74] duration metric: took 11.645516ms to wait for pod list to return data ...
	I0131 03:20:52.930635 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:52.943250 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:52.943291 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:52.943306 1465496 node_conditions.go:105] duration metric: took 12.665118ms to run NodePressure ...
	I0131 03:20:52.943328 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:53.231968 1465496 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239131 1465496 kubeadm.go:787] kubelet initialised
	I0131 03:20:53.239162 1465496 kubeadm.go:788] duration metric: took 7.159608ms waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239171 1465496 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:53.248561 1465496 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:55.256463 1465496 pod_ready.go:102] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.585633 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.086475 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.239922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.738132 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.686904 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.687249 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.187579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.261900 1465496 pod_ready.go:92] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:57.261928 1465496 pod_ready.go:81] duration metric: took 4.013340748s waiting for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:57.261940 1465496 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:59.268779 1465496 pod_ready.go:102] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.586066 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:02.085212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:58.739138 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.739184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:03.243732 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:01.686704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.186767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.771061 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:00.771093 1465496 pod_ready.go:81] duration metric: took 3.509144879s waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:00.771107 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279749 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.279778 1465496 pod_ready.go:81] duration metric: took 1.508661327s waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279792 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286520 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.286550 1465496 pod_ready.go:81] duration metric: took 6.748377ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286564 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292455 1465496 pod_ready.go:92] pod "kube-proxy-8fdx9" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.292479 1465496 pod_ready.go:81] duration metric: took 5.904786ms waiting for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292491 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:04.300076 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.086312 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.086965 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:05.737969 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:07.738025 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.686645 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:09.186769 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.300932 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.799183 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:06.799208 1465496 pod_ready.go:81] duration metric: took 4.506710382s waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:06.799220 1465496 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:08.806102 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:08.585128 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.586208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.085360 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.238339 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:12.739920 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.186807 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.686030 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.306903 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.808471 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.085478 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.584968 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.238994 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.738301 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.686243 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.687966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:16.306169 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:18.306368 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.585283 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.085635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.738554 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:21.739391 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.186216 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.186318 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.186605 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.807270 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:23.307367 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.086508 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.585310 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.239650 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.739133 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.687020 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.186319 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:25.807083 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:27.807373 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.809229 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:28.586494 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.085758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.086070 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.237951 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.239234 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.186403 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.186539 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:32.305137 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:34.306664 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.586212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.085235 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.737751 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.239168 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.187669 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:37.686468 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.806650 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:39.305925 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.586428 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.084565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.739723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.237973 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.186321 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:42.187314 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:44.188149 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:41.307318 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.806323 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.085539 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.585341 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.239462 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.738184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:46.686042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.686866 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.806734 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.305446 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.305723 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.085346 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.085442 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:49.738268 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.239669 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.691518 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:53.186195 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.306654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.806020 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.085761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.586368 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.738548 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.739623 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:55.686288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:57.687383 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.807570 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.309552 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.084865 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.085071 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.085111 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.239410 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.239532 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:00.186408 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:02.186782 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.186839 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.806329 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.584749 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:07.586565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.739463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.740128 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.237766 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.187392 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.685886 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.805996 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.807179 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.086003 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.585799 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.238067 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.239177 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.686223 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.686341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:11.305779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:13.307616 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.085808 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.584477 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:14.738859 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.238767 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.187173 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.687034 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.806730 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:18.306392 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.584606 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.585553 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.738470 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.739486 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.185802 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:22.187625 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.806949 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.306121 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:25.306685 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.585692 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.085348 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.237900 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.238299 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.686574 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.687740 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.186290 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:27.805534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.806722 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.585853 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.087573 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.738699 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:30.740922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.241273 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.687338 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.186661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:32.306153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.306543 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.584981 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.585484 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.085009 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.739413 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.240386 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.687329 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:39.185388 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.308028 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.806629 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.085644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.585560 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.242599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.737723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.186288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.186859 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.306389 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.586579 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.085969 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.739244 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.237508 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:45.188774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.687222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:46.306909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:48.807077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.584667 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.584768 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.239422 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.687896 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:52.188700 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.306677 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.806006 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.585081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.585777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.085122 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.237822 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:56.238861 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.686276 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:57.186263 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.806184 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.306128 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.306364 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.588396 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.598213 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.737414 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.737727 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.739935 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:59.685823 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:01.686758 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:04.185852 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.807107 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.305740 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.085415 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.585036 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.239645 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.739347 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:06.686504 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:08.687322 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.305816 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.305938 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.586253 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.085522 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:10.239099 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.738591 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.186874 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.686181 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.306129 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.806507 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.585172 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.586137 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.738697 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.739523 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:15.686511 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:17.687193 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.306767 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.808302 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:19.085852 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.586641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.739573 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.238839 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:20.187546 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:22.687140 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.306401 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.307029 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.085548 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:26.586436 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.737681 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.737740 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.687572 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.186506 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.808456 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:28.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:30.307207 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.085660 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.087058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.739207 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.238687 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.686331 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.688381 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.187104 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.805987 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.806181 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:33.586190 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.085219 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.085516 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.238857 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.239092 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.687993 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.688870 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.808335 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.085919 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.585866 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.738192 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.738455 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.739283 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.185567 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.186680 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.307589 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.309027 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:44.586117 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.085597 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.238409 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.240204 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.685781 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.686167 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.807531 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.807973 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:50.308410 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.086271 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.086456 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.737691 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.739418 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.686475 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.687616 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:52.806510 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.806619 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:53.586673 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.085541 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.085777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.238680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.238735 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.239259 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.685972 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.686560 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.806707 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.806764 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.087035 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.088546 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.239507 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.240463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.686709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.687576 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.806909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:03.306534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.307522 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.585131 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.585178 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.738411 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.738605 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.186000 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.686048 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.806058 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.306442 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:08.585611 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.088448 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:09.238896 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.239934 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.186391 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.187940 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.680057 1465898 pod_ready.go:81] duration metric: took 4m0.000955013s waiting for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:12.680105 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:12.680132 1465898 pod_ready.go:38] duration metric: took 4m8.549185211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:12.680181 1465898 kubeadm.go:640] restartCluster took 4m32.094843295s
	W0131 03:24:12.680310 1465898 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:12.680376 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
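	The default-k8s-diff-port-873005 restart above gives up after its 4m0s wait because the metrics-server pod never reports Ready, so minikube falls back to a full kubeadm reset. The same readiness check can be reproduced by hand (assuming, as the minikube addon normally arranges, that the pod carries the k8s-app=metrics-server label and that the kubeconfig context matches the profile name):

	    kubectl --context default-k8s-diff-port-873005 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context default-k8s-diff-port-873005 -n kube-system describe pod metrics-server-57f55c9bc5-fct8b
	    kubectl --context default-k8s-diff-port-873005 -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m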
	I0131 03:24:12.307149 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:14.307483 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.586901 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.087404 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.738698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.239338 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.239499 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.806617 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:19.305298 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.585870 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.087112 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:20.737368 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:22.738599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.306715 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.807030 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.586072 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:25.586464 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.586525 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:24.731792 1465727 pod_ready.go:81] duration metric: took 4m0.00020412s waiting for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:24.731846 1465727 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:24.731869 1465727 pod_ready.go:38] duration metric: took 4m1.198813077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:24.731907 1465727 kubeadm.go:640] restartCluster took 5m3.213957096s
	W0131 03:24:24.731983 1465727 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:24.732022 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:26.064348 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.383924825s)
	I0131 03:24:26.064423 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:26.076943 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:26.087474 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:26.095980 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:26.096026 1465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:26.286603 1465898 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:25.808330 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.809779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.308001 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.087127 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:32.589212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:31.227776 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.495715112s)
	I0131 03:24:31.227855 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:31.241889 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:31.251082 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:31.259843 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:31.259887 1465727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0131 03:24:31.469869 1465727 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:32.310672 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:34.808959 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:36.696825 1465898 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:36.696904 1465898 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:36.696998 1465898 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:36.697121 1465898 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:36.697231 1465898 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:36.697306 1465898 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:36.699102 1465898 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:36.699244 1465898 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:36.699334 1465898 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:36.699475 1465898 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:36.699584 1465898 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:36.699700 1465898 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:36.699785 1465898 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:36.699873 1465898 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:36.699958 1465898 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:36.700052 1465898 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:36.700172 1465898 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:36.700217 1465898 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:36.700283 1465898 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:36.700345 1465898 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:36.700406 1465898 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:36.700482 1465898 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:36.700549 1465898 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:36.700647 1465898 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:36.700731 1465898 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:36.702370 1465898 out.go:204]   - Booting up control plane ...
	I0131 03:24:36.702525 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:36.702658 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:36.702731 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:36.702855 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:36.702975 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:36.703038 1465898 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:36.703248 1465898 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:36.703360 1465898 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503117 seconds
	I0131 03:24:36.703517 1465898 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:36.703652 1465898 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:36.703734 1465898 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:36.703950 1465898 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-873005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:36.704029 1465898 kubeadm.go:322] [bootstrap-token] Using token: 51ueuu.c5jl6zenf29j1pbj
	I0131 03:24:36.706123 1465898 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:36.706237 1465898 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:36.706316 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:36.706475 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:36.706662 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:36.706829 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:36.706946 1465898 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:36.707093 1465898 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:36.707179 1465898 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:36.707226 1465898 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:36.707236 1465898 kubeadm.go:322] 
	I0131 03:24:36.707310 1465898 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:36.707317 1465898 kubeadm.go:322] 
	I0131 03:24:36.707411 1465898 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:36.707418 1465898 kubeadm.go:322] 
	I0131 03:24:36.707438 1465898 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:36.707518 1465898 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:36.707590 1465898 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:36.707604 1465898 kubeadm.go:322] 
	I0131 03:24:36.707693 1465898 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:36.707706 1465898 kubeadm.go:322] 
	I0131 03:24:36.707775 1465898 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:36.707785 1465898 kubeadm.go:322] 
	I0131 03:24:36.707834 1465898 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:36.707932 1465898 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:36.708029 1465898 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:36.708038 1465898 kubeadm.go:322] 
	I0131 03:24:36.708135 1465898 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:36.708236 1465898 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:36.708245 1465898 kubeadm.go:322] 
	I0131 03:24:36.708341 1465898 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708458 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:36.708490 1465898 kubeadm.go:322] 	--control-plane 
	I0131 03:24:36.708499 1465898 kubeadm.go:322] 
	I0131 03:24:36.708601 1465898 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:36.708611 1465898 kubeadm.go:322] 
	I0131 03:24:36.708703 1465898 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708836 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:36.708855 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:24:36.708865 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:36.710643 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:33.579236 1466459 pod_ready.go:81] duration metric: took 4m0.001168183s waiting for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:33.579284 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:33.579320 1466459 pod_ready.go:38] duration metric: took 4m12.550695133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:33.579357 1466459 kubeadm.go:640] restartCluster took 4m32.725356038s
	W0131 03:24:33.579451 1466459 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:33.579495 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:36.712379 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:36.727135 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:36.752650 1465898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:36.752760 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.752766 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=default-k8s-diff-port-873005 minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.833601 1465898 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:37.204982 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:37.706104 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.205928 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.705169 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:39.205448 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
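	The bridge CNI step above writes the 457-byte /etc/cni/net.d/1-k8s.conflist into the VM and creates the minikube-rbac clusterrolebinding; both can be inspected from the host afterwards, for example:

	    minikube -p default-k8s-diff-port-873005 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
	    kubectl --context default-k8s-diff-port-873005 get clusterrolebinding minikube-rbac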
	I0131 03:24:36.810623 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:39.308000 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:44.456046 1465727 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0131 03:24:44.456133 1465727 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:44.456239 1465727 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:44.456349 1465727 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:44.456507 1465727 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:44.456673 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:44.456815 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:44.456888 1465727 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0131 03:24:44.456975 1465727 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:44.458558 1465727 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:44.458637 1465727 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:44.458740 1465727 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:44.458837 1465727 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:44.458937 1465727 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:44.459040 1465727 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:44.459117 1465727 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:44.459212 1465727 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:44.459291 1465727 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:44.459385 1465727 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:44.459491 1465727 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:44.459552 1465727 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:44.459628 1465727 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:44.459691 1465727 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:44.459755 1465727 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:44.459827 1465727 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:44.459899 1465727 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:44.460002 1465727 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:44.461481 1465727 out.go:204]   - Booting up control plane ...
	I0131 03:24:44.461592 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:44.461687 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:44.461801 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:44.461930 1465727 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:44.462130 1465727 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:44.462255 1465727 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503405 seconds
	I0131 03:24:44.462398 1465727 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:44.462577 1465727 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:44.462653 1465727 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:44.462817 1465727 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-711547 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0131 03:24:44.462913 1465727 kubeadm.go:322] [bootstrap-token] Using token: etlsjx.t1u4cz6ewuek932w
	I0131 03:24:44.465248 1465727 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:44.465404 1465727 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:44.465615 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:44.465805 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:44.465987 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:44.466088 1465727 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:44.466170 1465727 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:44.466239 1465727 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:44.466247 1465727 kubeadm.go:322] 
	I0131 03:24:44.466332 1465727 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:44.466354 1465727 kubeadm.go:322] 
	I0131 03:24:44.466456 1465727 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:44.466473 1465727 kubeadm.go:322] 
	I0131 03:24:44.466524 1465727 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:44.466596 1465727 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:44.466677 1465727 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:44.466696 1465727 kubeadm.go:322] 
	I0131 03:24:44.466764 1465727 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:44.466870 1465727 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:44.466971 1465727 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:44.466988 1465727 kubeadm.go:322] 
	I0131 03:24:44.467085 1465727 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0131 03:24:44.467196 1465727 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:44.467208 1465727 kubeadm.go:322] 
	I0131 03:24:44.467300 1465727 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467443 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:44.467479 1465727 kubeadm.go:322]     --control-plane 	  
	I0131 03:24:44.467488 1465727 kubeadm.go:322] 
	I0131 03:24:44.467588 1465727 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:44.467599 1465727 kubeadm.go:322] 
	I0131 03:24:44.467695 1465727 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467834 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:44.467849 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:24:44.467858 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:44.470130 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:39.705234 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.205164 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.705674 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.205045 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.705592 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.205813 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.705913 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.205465 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.705236 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.205365 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.807553 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:43.809153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:47.613982 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.034446752s)
	I0131 03:24:47.614087 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:47.627141 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:47.635785 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:47.643856 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:47.643912 1466459 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:47.866988 1466459 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:44.472066 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:44.484082 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:44.503062 1465727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:44.503138 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.503164 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=old-k8s-version-711547 minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.557194 1465727 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:44.796311 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.296601 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.796904 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.296474 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.796658 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.296647 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.796712 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.296469 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.705251 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.205696 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.705947 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.205519 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.705735 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.205285 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.706009 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.205416 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.705969 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.205783 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.306658 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:48.307077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:50.311654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:49.705636 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.205958 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.456803 1465898 kubeadm.go:1088] duration metric: took 13.704121927s to wait for elevateKubeSystemPrivileges.
	I0131 03:24:50.456854 1465898 kubeadm.go:406] StartCluster complete in 5m9.932475085s
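	The repeated "kubectl get sa default" runs above are the post-init wait the log reports as elevateKubeSystemPrivileges: the command from the log is simply re-run until the default ServiceAccount exists in the freshly initialized cluster. As an illustrative sketch only (not minikube's actual code), the equivalent shell loop would be:

	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done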
	I0131 03:24:50.456883 1465898 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.457001 1465898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:24:50.460015 1465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.460408 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:24:50.460617 1465898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
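	With StartCluster complete, the run moves on to addons; per the toEnable map above, only default-storageclass, metrics-server and storage-provisioner are switched on for this profile. The roughly equivalent manual commands would be:

	    minikube -p default-k8s-diff-port-873005 addons list
	    minikube -p default-k8s-diff-port-873005 addons enable metrics-server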
	I0131 03:24:50.460718 1465898 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460745 1465898 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.460753 1465898 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:24:50.460798 1465898 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460831 1465898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-873005"
	I0131 03:24:50.460855 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461315 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461342 1465898 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.461361 1465898 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:50.461364 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0131 03:24:50.461369 1465898 addons.go:243] addon metrics-server should already be in state true
	I0131 03:24:50.461410 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461322 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461644 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.461778 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461812 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.460670 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:24:50.486168 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0131 03:24:50.486189 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0131 03:24:50.486323 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0131 03:24:50.486737 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487153 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487761 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.487781 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488055 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.488074 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488193 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.488460 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.488587 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.488984 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.489649 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.489717 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.490413 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.490433 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.492357 1465898 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.492372 1465898 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:24:50.492402 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.492774 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.492815 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.493142 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.493853 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.493904 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.510041 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0131 03:24:50.510628 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.511294 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.511316 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.511749 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.511982 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.512352 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0131 03:24:50.512842 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.513435 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.513454 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.513922 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.513984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.514319 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0131 03:24:50.516752 1465898 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:24:50.514718 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.514788 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.518232 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:24:50.518238 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.518248 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:24:50.518271 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.521721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.522659 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522988 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.523038 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.523050 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.523231 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.523401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.523571 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.526843 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.530691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.532381 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.534246 1465898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:24:50.535799 1465898 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.535826 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:24:50.535848 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.538666 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.538998 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.539031 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.539275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.540037 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0131 03:24:50.540217 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.540435 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.540502 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.540575 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.541462 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.541480 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.541918 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.542136 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.543588 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.546790 1465898 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.546807 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:24:50.546828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.549791 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550227 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.550254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550545 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.550712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.550827 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.550914 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.720404 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:24:50.750602 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:24:50.750631 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:24:50.770493 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.781740 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.831005 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:24:50.831037 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:24:50.957145 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:50.957195 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:24:50.995868 1465898 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-873005" context rescaled to 1 replicas
	I0131 03:24:50.995924 1465898 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:24:50.997774 1465898 out.go:177] * Verifying Kubernetes components...
	I0131 03:24:50.999400 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:51.127181 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:52.814257 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.093763301s)
	I0131 03:24:52.814295 1465898 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0131 03:24:53.442603 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.660817091s)
	I0131 03:24:53.442735 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.315510869s)
	I0131 03:24:53.442653 1465898 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.443214595s)
	I0131 03:24:53.442784 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442807 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442746 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442847 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442800 1465898 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.442686 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.672154364s)
	I0131 03:24:53.442931 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442944 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443178 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443204 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443234 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443271 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443290 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443307 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443324 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443326 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443342 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443355 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443370 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443443 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443463 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443474 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443484 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443558 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443571 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443834 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443843 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443852 1465898 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:53.443857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.444009 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.444018 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.477413 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.477442 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.477848 1465898 node_ready.go:49] node "default-k8s-diff-port-873005" has status "Ready":"True"
	I0131 03:24:53.477878 1465898 node_ready.go:38] duration metric: took 34.988647ms waiting for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.477903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.477913 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.477891 1465898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:53.477926 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:48.797209 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.296541 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.796400 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.297357 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.797175 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.297121 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.796457 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.297151 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.797043 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.296354 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.480701 1465898 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0131 03:24:53.482138 1465898 addons.go:505] enable addons completed in 3.021541847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0131 03:24:53.518183 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:52.806757 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:54.808761 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:53.796405 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.296358 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.796988 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.296633 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.797131 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.296750 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.797103 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.296955 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.796330 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.296387 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.837963 1466459 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:58.838075 1466459 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:58.838193 1466459 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:58.838328 1466459 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:58.838507 1466459 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:58.838599 1466459 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:58.840259 1466459 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:58.840364 1466459 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:58.840490 1466459 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:58.840620 1466459 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:58.840718 1466459 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:58.840826 1466459 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:58.840905 1466459 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:58.841008 1466459 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:58.841106 1466459 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:58.841214 1466459 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:58.841304 1466459 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:58.841349 1466459 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:58.841420 1466459 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:58.841492 1466459 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:58.841553 1466459 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:58.841621 1466459 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:58.841694 1466459 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:58.841805 1466459 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:58.841887 1466459 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:58.843555 1466459 out.go:204]   - Booting up control plane ...
	I0131 03:24:58.843684 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:58.843804 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:58.843917 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:58.844072 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:58.844208 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:58.844297 1466459 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:58.844540 1466459 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:58.844657 1466459 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003861 seconds
	I0131 03:24:58.844797 1466459 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:58.844947 1466459 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:58.845022 1466459 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:58.845232 1466459 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-958254 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:58.845309 1466459 kubeadm.go:322] [bootstrap-token] Using token: ash1vg.z2czyygl2nysl4yb
	I0131 03:24:58.846832 1466459 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:58.846943 1466459 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:58.847042 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:58.847238 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:58.847445 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:58.847620 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:58.847735 1466459 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:58.847908 1466459 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:58.847969 1466459 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:58.848034 1466459 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:58.848045 1466459 kubeadm.go:322] 
	I0131 03:24:58.848142 1466459 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:58.848152 1466459 kubeadm.go:322] 
	I0131 03:24:58.848279 1466459 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:58.848308 1466459 kubeadm.go:322] 
	I0131 03:24:58.848355 1466459 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:58.848440 1466459 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:58.848515 1466459 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:58.848531 1466459 kubeadm.go:322] 
	I0131 03:24:58.848611 1466459 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:58.848622 1466459 kubeadm.go:322] 
	I0131 03:24:58.848684 1466459 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:58.848692 1466459 kubeadm.go:322] 
	I0131 03:24:58.848769 1466459 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:58.848884 1466459 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:58.848987 1466459 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:58.848994 1466459 kubeadm.go:322] 
	I0131 03:24:58.849127 1466459 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:58.849252 1466459 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:58.849265 1466459 kubeadm.go:322] 
	I0131 03:24:58.849390 1466459 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849540 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:58.849572 1466459 kubeadm.go:322] 	--control-plane 
	I0131 03:24:58.849587 1466459 kubeadm.go:322] 
	I0131 03:24:58.849698 1466459 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:58.849710 1466459 kubeadm.go:322] 
	I0131 03:24:58.849817 1466459 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849963 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:58.849981 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:24:58.849991 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:58.851748 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:54.532127 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.532155 1465898 pod_ready.go:81] duration metric: took 1.013942045s waiting for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.532164 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537895 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.537924 1465898 pod_ready.go:81] duration metric: took 5.752669ms waiting for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537937 1465898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543819 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.543850 1465898 pod_ready.go:81] duration metric: took 5.903392ms waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543863 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549279 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.549303 1465898 pod_ready.go:81] duration metric: took 5.431331ms waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549315 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647791 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.647830 1465898 pod_ready.go:81] duration metric: took 98.504261ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647846 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446878 1465898 pod_ready.go:92] pod "kube-proxy-blwwq" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.446913 1465898 pod_ready.go:81] duration metric: took 799.058225ms waiting for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446927 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848226 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.848261 1465898 pod_ready.go:81] duration metric: took 401.323547ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848275 1465898 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:57.855091 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:57.306243 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:59.307152 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:58.796423 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.297312 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.796598 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.296932 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.797306 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.963954 1465727 kubeadm.go:1088] duration metric: took 16.460870964s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:00.964007 1465727 kubeadm.go:406] StartCluster complete in 5m39.492487154s
	I0131 03:25:00.964037 1465727 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.964135 1465727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:00.965942 1465727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.966222 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:00.966379 1465727 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:00.966464 1465727 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966478 1465727 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966474 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:25:00.966502 1465727 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966514 1465727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-711547"
	I0131 03:25:00.966522 1465727 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-711547"
	W0131 03:25:00.966531 1465727 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:00.966493 1465727 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-711547"
	W0131 03:25:00.966557 1465727 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:00.966579 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966610 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966981 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.966993 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967028 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967040 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967142 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967186 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.986034 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0131 03:25:00.986291 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0131 03:25:00.986619 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.986746 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.987299 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987320 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987467 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987479 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987834 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.988010 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:00.988075 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0131 03:25:00.988399 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.989011 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.989031 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.989620 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.990204 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.990247 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.990830 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.991921 1465727 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-711547"
	W0131 03:25:00.991946 1465727 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:00.991979 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.992390 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.992429 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.996772 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.996817 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.009234 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0131 03:25:01.009861 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.010560 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.010580 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.011185 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.011401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.013070 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0131 03:25:01.013907 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.014029 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.016324 1465727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:01.014597 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.017922 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.018046 1465727 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.018070 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:01.018094 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.018526 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.019101 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:01.019150 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.019442 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0131 03:25:01.019888 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.020393 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.020424 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.020822 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.020992 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.021500 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.022242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.022654 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.022821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.022997 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.023406 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.025473 1465727 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:01.026870 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:01.026888 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:01.026904 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.029751 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030085 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.030100 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030398 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.030647 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.030818 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.030977 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.037553 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0131 03:25:01.038049 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.038517 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.038542 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.038963 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.039329 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.041534 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.042115 1465727 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.042137 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:01.042170 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.045444 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.045973 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.045992 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.046187 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.046374 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.046619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.046751 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.284926 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:01.284951 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:01.298019 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:01.338666 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.364117 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.383424 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:01.383460 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:01.499627 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.499676 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:01.557563 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.633792 1465727 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-711547" context rescaled to 1 replicas
	I0131 03:25:01.633844 1465727 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:01.636944 1465727 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:01.638596 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:02.375769 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.07770508s)
	I0131 03:25:02.375806 1465727 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:02.849278 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.485115978s)
	I0131 03:25:02.849343 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849348 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.510642603s)
	I0131 03:25:02.849361 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849397 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849411 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849431 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291827391s)
	I0131 03:25:02.849463 1465727 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.210839065s)
	I0131 03:25:02.849466 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849478 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849490 1465727 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.851686 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851687 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851705 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851714 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851701 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851724 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851732 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851715 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851726 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851744 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851749 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851754 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851736 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851812 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851828 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.852136 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852158 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852178 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852187 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852194 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852203 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852214 1465727 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-711547"
	I0131 03:25:02.852220 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852249 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852257 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.878278 1465727 node_ready.go:49] node "old-k8s-version-711547" has status "Ready":"True"
	I0131 03:25:02.878313 1465727 node_ready.go:38] duration metric: took 28.809729ms waiting for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.878339 1465727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:02.906619 1465727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:02.910781 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.910809 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.911127 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.911137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.911148 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.913178 1465727 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0131 03:24:58.853196 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:58.880016 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:58.909967 1466459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:58.910062 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.910111 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=embed-certs-958254 minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.271954 1466459 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:59.310346 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.810934 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.310635 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.810402 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.310569 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.810714 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.310744 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.811360 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:03.311376 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.915069 1465727 addons.go:505] enable addons completed in 1.948706414s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0131 03:24:59.856962 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:02.358614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:01.807470 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:04.306044 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:03.811326 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.310435 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.811033 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.310537 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.810596 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.311182 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.811200 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.310633 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.810619 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:08.310985 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.914636 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:07.415226 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.414866 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.414894 1465727 pod_ready.go:81] duration metric: took 5.508246838s waiting for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.414904 1465727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421152 1465727 pod_ready.go:92] pod "kube-proxy-wzft2" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.421177 1465727 pod_ready.go:81] duration metric: took 6.2664ms waiting for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421191 1465727 pod_ready.go:38] duration metric: took 5.542837407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
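The pod_ready lines above are minikube polling each system-critical pod until its Ready condition turns True, within the extra 6m0s budget it logs. A rough client-go sketch of that kind of readiness poll follows; the helper name, poll interval, and kubeconfig path are illustrative assumptions, not minikube's actual pod_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls one pod until its Ready condition reports True or the
// timeout expires; transient API errors are swallowed so the poll keeps going.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReady(cs, "kube-system", "coredns-5644d7b6d9-qq7jp", 6*time.Minute)
	fmt.Println("wait result:", err)
}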
	I0131 03:25:08.421243 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:08.421313 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:08.439228 1465727 api_server.go:72] duration metric: took 6.805346982s to wait for apiserver process to appear ...
	I0131 03:25:08.439258 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:08.439321 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:25:08.445886 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:25:08.446826 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:25:08.446848 1465727 api_server.go:131] duration metric: took 7.582095ms to wait for apiserver health ...
	I0131 03:25:08.446856 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:08.450063 1465727 system_pods.go:59] 4 kube-system pods found
	I0131 03:25:08.450085 1465727 system_pods.go:61] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.450089 1465727 system_pods.go:61] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.450095 1465727 system_pods.go:61] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.450100 1465727 system_pods.go:61] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.450112 1465727 system_pods.go:74] duration metric: took 3.250434ms to wait for pod list to return data ...
	I0131 03:25:08.450121 1465727 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:08.452528 1465727 default_sa.go:45] found service account: "default"
	I0131 03:25:08.452546 1465727 default_sa.go:55] duration metric: took 2.420247ms for default service account to be created ...
	I0131 03:25:08.452553 1465727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:08.457485 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.457514 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.457522 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.457533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.457540 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.457561 1465727 retry.go:31] will retry after 235.942588ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:04.856217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.856378 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.857457 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.800354 1465496 pod_ready.go:81] duration metric: took 4m0.001111271s waiting for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:06.800395 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:25:06.800424 1465496 pod_ready.go:38] duration metric: took 4m13.561240535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:06.800474 1465496 kubeadm.go:640] restartCluster took 4m33.63933558s
	W0131 03:25:06.800585 1465496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:25:06.800626 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:25:08.811193 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.310464 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.810641 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.310665 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.810667 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.995304 1466459 kubeadm.go:1088] duration metric: took 12.08531849s to wait for elevateKubeSystemPrivileges.
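The repeated `kubectl get sa default` commands above are minikube waiting for the cluster to create the default ServiceAccount; once that succeeds, the elevateKubeSystemPrivileges step is recorded as complete (12.08s here). A minimal sketch of that wait loop, assuming a plain kubectl on PATH rather than the versioned binary minikube runs over SSH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` until the
// command succeeds (the ServiceAccount exists) or the deadline passes,
// mirroring the ~500ms polling visible in the log above.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}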
	I0131 03:25:10.995343 1466459 kubeadm.go:406] StartCluster complete in 5m10.197561628s
	I0131 03:25:10.995368 1466459 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.995476 1466459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:10.997565 1466459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.998562 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:10.998861 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:25:10.999077 1466459 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:10.999167 1466459 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-958254"
	I0131 03:25:10.999184 1466459 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-958254"
	W0131 03:25:10.999192 1466459 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:10.999198 1466459 addons.go:69] Setting default-storageclass=true in profile "embed-certs-958254"
	I0131 03:25:10.999232 1466459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-958254"
	I0131 03:25:10.999234 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:10.999598 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999631 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999673 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999709 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999738 1466459 addons.go:69] Setting metrics-server=true in profile "embed-certs-958254"
	I0131 03:25:10.999759 1466459 addons.go:234] Setting addon metrics-server=true in "embed-certs-958254"
	W0131 03:25:10.999767 1466459 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:10.999811 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.000160 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.000206 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.020646 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0131 03:25:11.020716 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0131 03:25:11.021273 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021412 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021944 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.021972 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022107 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.022139 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022542 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022540 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022777 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.023181 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.023224 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.027202 1466459 addons.go:234] Setting addon default-storageclass=true in "embed-certs-958254"
	W0131 03:25:11.027230 1466459 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:11.027263 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.027702 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.027754 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.028003 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0131 03:25:11.029048 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.029571 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.029590 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.030209 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.030885 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.030931 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.042923 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0131 03:25:11.043492 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.044071 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.044086 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.044497 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.044800 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.046645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.049444 1466459 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:11.051401 1466459 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.051441 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:11.051477 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.054476 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055341 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.055429 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0131 03:25:11.055608 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.055626 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055808 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.056025 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.056244 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.056409 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.056920 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.056932 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.056989 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0131 03:25:11.057274 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.057428 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.057495 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.057847 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.057860 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.058662 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.059343 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.059372 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.059555 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.061701 1466459 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:11.063119 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:11.063138 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:11.063159 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.066101 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066408 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.066423 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066762 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.066931 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.067054 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.067162 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.080881 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0131 03:25:11.081403 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.081919 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.081931 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.082442 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.082905 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.085059 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.085518 1466459 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.085529 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:11.085545 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.087954 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.088806 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.088858 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.088868 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.089011 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.089197 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.089609 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.229346 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.255093 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:11.255124 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:11.278162 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.314832 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:11.314860 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:11.374433 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.374463 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:11.386186 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:11.431597 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.617487 1466459 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-958254" context rescaled to 1 replicas
	I0131 03:25:11.617543 1466459 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:11.620222 1466459 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:11.621888 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
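The sed pipeline run against the coredns ConfigMap a few lines up injects a hosts block so cluster workloads can resolve host.minikube.internal to the host-side address (192.168.39.1 here), and adds the log plugin for query logging. Reconstructed from those sed expressions, the patched portion of the Corefile ends up looking roughly like this (other plugins in the server block are left out):

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

Later in the log, the start.go:929 line reports that this host record was injected into CoreDNS's ConfigMap once the replace completed.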
	I0131 03:25:08.700194 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.700226 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.700232 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.700238 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.700243 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.700267 1465727 retry.go:31] will retry after 264.487072ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:08.970950 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.970994 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.971002 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.971013 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.971020 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.971113 1465727 retry.go:31] will retry after 296.249207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.273631 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.273666 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.273675 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.273683 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.273696 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.273722 1465727 retry.go:31] will retry after 556.880076ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.835957 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.835985 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.835991 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.835997 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.836002 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.836020 1465727 retry.go:31] will retry after 541.012405ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:10.382622 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:10.382657 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:10.382665 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:10.382674 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:10.382681 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:10.382705 1465727 retry.go:31] will retry after 644.079363ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.036738 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.036777 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.036785 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.036796 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.036803 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.036825 1465727 retry.go:31] will retry after 832.963851ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.877526 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.877569 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.877578 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.877589 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.877597 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.877635 1465727 retry.go:31] will retry after 1.088792554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:12.972355 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:12.972391 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:12.972397 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:12.972403 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:12.972408 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:12.972428 1465727 retry.go:31] will retry after 1.37018086s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
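The retry loop above is the system_pods check: minikube lists the kube-system pods it can see and retries with a roughly increasing delay while any expected control-plane component (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) still has no pod, which is why only the same 4 found pods are printed each round on this old-k8s-version cluster. A simplified sketch of such a check; the name-prefix matching and the kubeconfig path are assumptions for illustration, not minikube's exact logic:

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// missingComponents lists kube-system pods and returns every expected
// control-plane component for which no pod with that name prefix exists yet.
func missingComponents(cs *kubernetes.Clientset) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	expected := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
	var missing []string
	for _, comp := range expected {
		found := false
		for _, p := range pods.Items {
			if strings.HasPrefix(p.Name, comp) {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, comp)
		}
	}
	return missing, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	missing, err := missingComponents(cs)
	if err != nil {
		panic(err)
	}
	fmt.Println("missing components:", missing)
}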
	I0131 03:25:13.615542 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337333269s)
	I0131 03:25:13.615599 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.229373467s)
	I0131 03:25:13.615607 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615633 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.615632 1466459 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:13.615738 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.386359945s)
	I0131 03:25:13.615790 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615807 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616101 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616109 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616118 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616129 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616138 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616174 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616184 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616194 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616204 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616351 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616374 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.617924 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.618094 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.618057 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.783459 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.783487 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.783847 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.783872 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.966310 1466459 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.344369704s)
	I0131 03:25:13.966372 1466459 node_ready.go:35] waiting up to 6m0s for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.966498 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.534826964s)
	I0131 03:25:13.966582 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.966602 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.966990 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967011 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967023 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.967033 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.967278 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967298 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967310 1466459 addons.go:470] Verifying addon metrics-server=true in "embed-certs-958254"
	I0131 03:25:13.970159 1466459 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:10.858108 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.357207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.971527 1466459 addons.go:505] enable addons completed in 2.972461213s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:13.987533 1466459 node_ready.go:49] node "embed-certs-958254" has status "Ready":"True"
	I0131 03:25:13.987564 1466459 node_ready.go:38] duration metric: took 21.175558ms waiting for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.987577 1466459 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:13.998968 1466459 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505741 1466459 pod_ready.go:92] pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.505764 1466459 pod_ready.go:81] duration metric: took 1.506759288s waiting for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505775 1466459 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511011 1466459 pod_ready.go:92] pod "etcd-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.511037 1466459 pod_ready.go:81] duration metric: took 5.255671ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511050 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515672 1466459 pod_ready.go:92] pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.515691 1466459 pod_ready.go:81] duration metric: took 4.632936ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515699 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520372 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.520388 1466459 pod_ready.go:81] duration metric: took 4.683171ms waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520397 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570633 1466459 pod_ready.go:92] pod "kube-proxy-2n2v5" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.570660 1466459 pod_ready.go:81] duration metric: took 50.257557ms waiting for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570671 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970302 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.970325 1466459 pod_ready.go:81] duration metric: took 399.647846ms waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970336 1466459 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:17.977775 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:14.349642 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:14.349679 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:14.349688 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:14.349698 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:14.349705 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:14.349726 1465727 retry.go:31] will retry after 1.923619057s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:16.279057 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:16.279090 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:16.279098 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:16.279108 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:16.279114 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:16.279137 1465727 retry.go:31] will retry after 2.073030623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:18.359162 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:18.359189 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:18.359195 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:18.359204 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:18.359209 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:18.359228 1465727 retry.go:31] will retry after 3.260033275s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:15.855521 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:17.855614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:20.514278 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.713623849s)
	I0131 03:25:20.514394 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:20.527663 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:25:20.536562 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:25:20.545294 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:25:20.545336 1465496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:25:20.598639 1465496 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0131 03:25:20.598867 1465496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:25:20.744229 1465496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:25:20.744371 1465496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:25:20.744509 1465496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:25:20.966346 1465496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:25:20.968311 1465496 out.go:204]   - Generating certificates and keys ...
	I0131 03:25:20.968451 1465496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:25:20.968540 1465496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:25:20.968652 1465496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:25:20.968758 1465496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:25:20.968846 1465496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:25:20.969285 1465496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:25:20.969711 1465496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:25:20.970103 1465496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:25:20.970500 1465496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:25:20.970914 1465496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:25:20.971238 1465496 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:25:20.971319 1465496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:25:21.137192 1465496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:25:21.403913 1465496 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0131 03:25:21.508809 1465496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:25:21.721878 1465496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:25:22.136726 1465496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:25:22.137207 1465496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:25:22.139977 1465496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:25:19.979362 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.477779 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.624554 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:21.624586 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:21.624592 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:21.624602 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:21.624607 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:21.624626 1465727 retry.go:31] will retry after 3.519201574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:19.856226 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.856396 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:23.857487 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.141783 1465496 out.go:204]   - Booting up control plane ...
	I0131 03:25:22.141884 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:25:22.141972 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:25:22.143031 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:25:22.163448 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:25:22.163586 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:25:22.163682 1465496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:25:22.287643 1465496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:25:24.479871 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:26.977625 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:25.149248 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:25.149277 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:25.149282 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:25.149290 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:25.149295 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:25.149314 1465727 retry.go:31] will retry after 5.238557946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:25.857650 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:28.356862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.793355 1465496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506089 seconds
	I0131 03:25:30.811559 1465496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:25:30.830148 1465496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:25:31.367774 1465496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:25:31.368036 1465496 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-625812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:25:31.887121 1465496 kubeadm.go:322] [bootstrap-token] Using token: t3t0h9.3huj9bl3w24ti869
	I0131 03:25:31.888852 1465496 out.go:204]   - Configuring RBAC rules ...
	I0131 03:25:31.888974 1465496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:25:31.893841 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:25:31.902695 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:25:31.908132 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:25:31.912738 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:25:31.918089 1465496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:25:31.936690 1465496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:25:32.182433 1465496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:25:32.325953 1465496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:25:32.325981 1465496 kubeadm.go:322] 
	I0131 03:25:32.326114 1465496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:25:32.326143 1465496 kubeadm.go:322] 
	I0131 03:25:32.326244 1465496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:25:32.326272 1465496 kubeadm.go:322] 
	I0131 03:25:32.326332 1465496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:25:32.326416 1465496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:25:32.326500 1465496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:25:32.326511 1465496 kubeadm.go:322] 
	I0131 03:25:32.326588 1465496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:25:32.326598 1465496 kubeadm.go:322] 
	I0131 03:25:32.326664 1465496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:25:32.326674 1465496 kubeadm.go:322] 
	I0131 03:25:32.326743 1465496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:25:32.326853 1465496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:25:32.326947 1465496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:25:32.326958 1465496 kubeadm.go:322] 
	I0131 03:25:32.327052 1465496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:25:32.327151 1465496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:25:32.327160 1465496 kubeadm.go:322] 
	I0131 03:25:32.327264 1465496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327405 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:25:32.327437 1465496 kubeadm.go:322] 	--control-plane 
	I0131 03:25:32.327447 1465496 kubeadm.go:322] 
	I0131 03:25:32.327553 1465496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:25:32.327564 1465496 kubeadm.go:322] 
	I0131 03:25:32.327667 1465496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327800 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:25:32.328638 1465496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:25:32.328815 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:25:32.328835 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:25:32.330439 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:25:28.984930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:31.480349 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.393923 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:30.393959 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:30.393968 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:30.393979 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:30.393985 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:30.394010 1465727 retry.go:31] will retry after 6.045479872s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:30.357227 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.358411 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.332529 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:25:32.442284 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
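For reference, the 457-byte /etc/cni/net.d/1-k8s.conflist written above is a standard bridge CNI configuration. A minimal sketch of what such a conflist typically contains is shown below; the concrete values (CNI version, bridge name, pod subnet) are illustrative assumptions and not the actual file contents, which the log does not print:

	$ cat /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}

In this sketch the bridge plugin attaches pods to a Linux bridge with host-local IPAM, and the portmap plugin handles hostPort mappings; the assumed 10.244.0.0/16 subnet stands in for whatever pod CIDR the profile actually uses.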
	I0131 03:25:32.487754 1465496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:25:32.487829 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.487926 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=no-preload-625812 minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.706857 1465496 ops.go:34] apiserver oom_adj: -16
	I0131 03:25:32.707010 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.207717 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.707229 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.207690 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.707786 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:35.207781 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.980255 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.481025 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.444898 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:36.444932 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:36.444938 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:36.444946 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:36.444951 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:36.444993 1465727 retry.go:31] will retry after 6.676077992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:34.855915 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:37.356945 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:35.707273 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.207173 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.707797 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.207697 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.707209 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.207989 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.707538 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.207693 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.707737 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:40.207439 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.980635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:41.479377 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:43.125885 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:43.125912 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:43.125917 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:43.125924 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:43.125928 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:43.125947 1465727 retry.go:31] will retry after 7.454064585s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:39.858377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:42.356966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:40.707639 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.207708 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.707131 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.207700 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.707292 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.207810 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.707392 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.207490 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.707258 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.883783 1465496 kubeadm.go:1088] duration metric: took 12.396028951s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:44.883823 1465496 kubeadm.go:406] StartCluster complete in 5m11.777629477s
	I0131 03:25:44.883850 1465496 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.883949 1465496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:44.886319 1465496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.886620 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:44.886727 1465496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:44.886814 1465496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-625812"
	I0131 03:25:44.886837 1465496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-625812"
	W0131 03:25:44.886849 1465496 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:44.886903 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.886934 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:25:44.886991 1465496 addons.go:69] Setting default-storageclass=true in profile "no-preload-625812"
	I0131 03:25:44.887007 1465496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-625812"
	I0131 03:25:44.887134 1465496 addons.go:69] Setting metrics-server=true in profile "no-preload-625812"
	I0131 03:25:44.887155 1465496 addons.go:234] Setting addon metrics-server=true in "no-preload-625812"
	W0131 03:25:44.887164 1465496 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:44.887216 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.887313 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887349 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887407 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887439 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887611 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887655 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.908876 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0131 03:25:44.908881 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0131 03:25:44.908879 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0131 03:25:44.909406 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909433 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909512 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909925 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.909950 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910054 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910098 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910123 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910148 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910434 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910530 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910543 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910740 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.911086 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911140 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.911185 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911230 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.914635 1465496 addons.go:234] Setting addon default-storageclass=true in "no-preload-625812"
	W0131 03:25:44.914667 1465496 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:44.914698 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.915089 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.915135 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.931265 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0131 03:25:44.931296 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0131 03:25:44.931816 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.931859 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.932148 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932599 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932677 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932938 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933062 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.933655 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.933681 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.933726 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933947 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934129 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.934262 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934954 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.935001 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.936333 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.938601 1465496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:44.940239 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:44.940256 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:44.940273 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.938638 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.942306 1465496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:44.944873 1465496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:44.944894 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:44.944914 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.943649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944987 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.945023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944263 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.945795 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.946072 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.946309 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.949097 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949522 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.949544 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949710 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.949892 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.950040 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.950179 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.959691 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0131 03:25:44.960146 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.960696 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.960723 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.961045 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.961279 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.963057 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.963321 1465496 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:44.963342 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:44.963363 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.966336 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.966808 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.966845 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.967006 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.967205 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.967329 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.967472 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:45.114858 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:45.135760 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:45.209439 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:45.209466 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:45.219146 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:45.287400 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:45.287430 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:45.380888 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:45.380917 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:45.462341 1465496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-625812" context rescaled to 1 replicas
	I0131 03:25:45.462403 1465496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:45.463834 1465496 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:45.465542 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:45.515980 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:46.322228 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.20732453s)
	I0131 03:25:46.322281 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.186472094s)
	I0131 03:25:46.322327 1465496 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
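The sed pipeline whose completion is logged just above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host IP. Based on the command shown, the resulting Corefile fragment looks roughly like the sketch below; the surrounding plugins are the stock CoreDNS defaults and are abbreviated here:

	.:53 {
	    log
	    errors
	    health
	    kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
	    hosts {
	       192.168.72.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    cache 30
	}

The hosts block is inserted immediately before the forward plugin, so queries for host.minikube.internal are answered locally while everything else falls through to the upstream resolver, and the added log directive makes query logging visible for the test.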
	I0131 03:25:46.322296 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322366 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322413 1465496 node_ready.go:35] waiting up to 6m0s for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.322369 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.103177926s)
	I0131 03:25:46.322663 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322676 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322757 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.322760 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.322773 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.322783 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322791 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323137 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323156 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323167 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.323176 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323177 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323257 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323281 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323295 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323733 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323755 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.329699 1465496 node_ready.go:49] node "no-preload-625812" has status "Ready":"True"
	I0131 03:25:46.329719 1465496 node_ready.go:38] duration metric: took 7.243031ms waiting for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.329728 1465496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:46.345672 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.345703 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.345984 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.346000 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.348953 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:46.699387 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183353653s)
	I0131 03:25:46.699456 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699474 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.699910 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.699932 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.699945 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699957 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.700251 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.700272 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.700285 1465496 addons.go:470] Verifying addon metrics-server=true in "no-preload-625812"
	I0131 03:25:46.702053 1465496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:43.980700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.478141 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:44.855513 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.857198 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:49.356657 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.703328 1465496 addons.go:505] enable addons completed in 1.816619953s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:46.865293 1465496 pod_ready.go:97] error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865325 1465496 pod_ready.go:81] duration metric: took 516.342792ms waiting for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:46.865336 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865343 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872316 1465496 pod_ready.go:92] pod "coredns-76f75df574-hvxjf" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.872345 1465496 pod_ready.go:81] duration metric: took 1.006996095s waiting for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872355 1465496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878192 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.878215 1465496 pod_ready.go:81] duration metric: took 5.854656ms waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878223 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883120 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.883139 1465496 pod_ready.go:81] duration metric: took 4.910099ms waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883147 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889909 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.889934 1465496 pod_ready.go:81] duration metric: took 6.780796ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889944 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926206 1465496 pod_ready.go:92] pod "kube-proxy-pkvj6" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:48.926230 1465496 pod_ready.go:81] duration metric: took 1.036280111s waiting for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926239 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325588 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:49.325613 1465496 pod_ready.go:81] duration metric: took 399.368272ms waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325623 1465496 pod_ready.go:38] duration metric: took 2.995885901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:49.325640 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:49.325693 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:49.339591 1465496 api_server.go:72] duration metric: took 3.877145066s to wait for apiserver process to appear ...
	I0131 03:25:49.339624 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:49.339652 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:25:49.345130 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:25:49.346350 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:25:49.346371 1465496 api_server.go:131] duration metric: took 6.739501ms to wait for apiserver health ...
	I0131 03:25:49.346379 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:49.529845 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:25:49.529876 1465496 system_pods.go:61] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.529881 1465496 system_pods.go:61] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.529885 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.529890 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.529894 1465496 system_pods.go:61] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.529898 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.529905 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.529909 1465496 system_pods.go:61] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.529918 1465496 system_pods.go:74] duration metric: took 183.532223ms to wait for pod list to return data ...
	I0131 03:25:49.529926 1465496 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:49.726239 1465496 default_sa.go:45] found service account: "default"
	I0131 03:25:49.726266 1465496 default_sa.go:55] duration metric: took 196.333831ms for default service account to be created ...
	I0131 03:25:49.726276 1465496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:49.933151 1465496 system_pods.go:86] 8 kube-system pods found
	I0131 03:25:49.933188 1465496 system_pods.go:89] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.933198 1465496 system_pods.go:89] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.933205 1465496 system_pods.go:89] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.933212 1465496 system_pods.go:89] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.933220 1465496 system_pods.go:89] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.933228 1465496 system_pods.go:89] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.933243 1465496 system_pods.go:89] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.933254 1465496 system_pods.go:89] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.933268 1465496 system_pods.go:126] duration metric: took 206.984671ms to wait for k8s-apps to be running ...
	I0131 03:25:49.933282 1465496 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:25:49.933345 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:49.949256 1465496 system_svc.go:56] duration metric: took 15.963316ms WaitForService to wait for kubelet.
	I0131 03:25:49.949290 1465496 kubeadm.go:581] duration metric: took 4.486852525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:25:49.949316 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:25:50.126992 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:25:50.127032 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:25:50.127044 1465496 node_conditions.go:105] duration metric: took 177.723252ms to run NodePressure ...
	I0131 03:25:50.127056 1465496 start.go:228] waiting for startup goroutines ...
	I0131 03:25:50.127063 1465496 start.go:233] waiting for cluster config update ...
	I0131 03:25:50.127072 1465496 start.go:242] writing updated cluster config ...
	I0131 03:25:50.127343 1465496 ssh_runner.go:195] Run: rm -f paused
	I0131 03:25:50.184224 1465496 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0131 03:25:50.186267 1465496 out.go:177] * Done! kubectl is now configured to use "no-preload-625812" cluster and "default" namespace by default
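At this point the no-preload-625812 profile is usable from the host. A quick way to reproduce the state recorded above (control-plane pods Running, metrics-server still Pending, consistent with the fake.domain/registry.k8s.io/echoserver:1.4 test image noted earlier in the log) would be, for example:

	kubectl --context no-preload-625812 -n kube-system get pods
	kubectl --context no-preload-625812 -n kube-system describe pod -l k8s-app=metrics-server

The context name matches the profile name that minikube configures; the k8s-app=metrics-server label selector is assumed from the standard metrics-server addon manifests rather than shown in this log.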
	I0131 03:25:48.481166 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.977129 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:52.977622 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.586089 1465727 system_pods.go:86] 6 kube-system pods found
	I0131 03:25:50.586129 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:50.586138 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Pending
	I0131 03:25:50.586144 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:50.586151 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Pending
	I0131 03:25:50.586172 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:50.586182 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:50.586211 1465727 retry.go:31] will retry after 13.55623924s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:51.856116 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:53.856661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:55.480823 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:57.978681 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:56.355895 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:58.356767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:59.981147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.479364 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:00.856081 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.977218 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:06.978863 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.148474 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:04.148505 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:04.148511 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Pending
	I0131 03:26:04.148516 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:04.148520 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:04.148524 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:04.148528 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:04.148533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:04.148537 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:04.148555 1465727 retry.go:31] will retry after 14.271857783s: missing components: etcd
	I0131 03:26:05.355042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:07.358366 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:08.981159 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:10.982761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:09.856454 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:12.357096 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:13.478470 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:15.977827 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.426593 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:18.426625 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:18.426634 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Running
	I0131 03:26:18.426641 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:18.426647 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:18.426652 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:18.426657 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:18.426667 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:18.426676 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:18.426690 1465727 system_pods.go:126] duration metric: took 1m9.974130417s to wait for k8s-apps to be running ...
	I0131 03:26:18.426704 1465727 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:26:18.426762 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:26:18.443853 1465727 system_svc.go:56] duration metric: took 17.14056ms WaitForService to wait for kubelet.
	I0131 03:26:18.443902 1465727 kubeadm.go:581] duration metric: took 1m16.810021481s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:26:18.443930 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:26:18.447269 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:26:18.447298 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:26:18.447311 1465727 node_conditions.go:105] duration metric: took 3.375419ms to run NodePressure ...
	I0131 03:26:18.447325 1465727 start.go:228] waiting for startup goroutines ...
	I0131 03:26:18.447333 1465727 start.go:233] waiting for cluster config update ...
	I0131 03:26:18.447348 1465727 start.go:242] writing updated cluster config ...
	I0131 03:26:18.447643 1465727 ssh_runner.go:195] Run: rm -f paused
	I0131 03:26:18.500327 1465727 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0131 03:26:18.502092 1465727 out.go:177] 
	W0131 03:26:18.503693 1465727 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0131 03:26:18.505132 1465727 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0131 03:26:18.506889 1465727 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-711547" cluster and "default" namespace by default
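Given the minor-skew warning above (host kubectl 1.29.1 against a 1.16.0 cluster), the bundled kubectl suggested by the log can be pointed at this profile explicitly, for example:

	minikube -p old-k8s-version-711547 kubectl -- get pods -A

Here -p selects the profile and everything after -- is passed to the version-matched kubectl that minikube fetches for the cluster, avoiding the 13-minor-version skew of the host binary.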
	I0131 03:26:14.856448 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:17.357112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.478401 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:20.977208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.978473 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:19.857118 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.358299 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:25.478227 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:27.978500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:24.855341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:26.855774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:28.856168 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:30.477275 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:32.478896 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:31.357512 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:33.363164 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:34.978058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:37.481411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:35.856084 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:38.358589 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:39.976914 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:41.979388 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:40.856122 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:42.856950 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:44.477345 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:46.478466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:45.356312 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:47.855178 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:48.978543 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.477641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:49.856079 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.856377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:54.358161 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:53.477989 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:55.977887 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:56.855581 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.856493 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.477589 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:00.478116 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:02.978262 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:01.354961 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:03.355994 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.478139 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.977913 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.356248 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.855596 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:10.479147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:12.977533 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:09.856222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:11.857068 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.356693 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.978967 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:17.477119 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:16.854825 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:18.855620 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:19.477877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:21.482081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:20.856333 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.355603 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.978877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:26.477700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:25.356085 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:27.356888 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:28.478497 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:30.977469 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:32.977663 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:29.854905 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:31.855752 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:33.855976 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.480505 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.977880 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.857042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.862112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:39.977961 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.478948 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:40.355787 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.358217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.977950 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.478570 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.855551 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.355853 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.977939 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:51.978267 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.855671 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:52.357889 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:53.979331 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:56.477411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:54.856642 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:57.357372 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:58.478175 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:00.977929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.978272 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:59.856232 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.356390 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:05.477602 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:07.478168 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:04.855423 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:06.859565 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.355517 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.977639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.977754 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.855199 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:13.856260 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:14.477406 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:16.478372 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:15.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:17.861124 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:18.980067 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:21.478833 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:20.356883 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:22.358007 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:23.979040 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.478463 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:24.855207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.855709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.866306 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.978973 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.477340 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.355706 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.855699 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.477521 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:35.478390 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:37.977270 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:36.358244 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:38.855704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:39.979930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.477381 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:40.856442 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.857041 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:44.477500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:46.478446 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:45.356039 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:47.855042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:48.977241 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:50.977925 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:52.978323 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:49.857897 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:51.857941 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:54.357042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.477690 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:57.477927 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.855298 1465898 pod_ready.go:81] duration metric: took 4m0.007008152s waiting for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	E0131 03:28:55.855323 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:28:55.855330 1465898 pod_ready.go:38] duration metric: took 4m2.377385486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:28:55.855346 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:28:55.855399 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:55.855533 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:55.913399 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:55.913425 1465898 cri.go:89] found id: ""
	I0131 03:28:55.913445 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:55.913515 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.918308 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:55.918379 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:55.964846 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:55.964872 1465898 cri.go:89] found id: ""
	I0131 03:28:55.964881 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:55.964942 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.969090 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:55.969158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:56.012247 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:56.012271 1465898 cri.go:89] found id: ""
	I0131 03:28:56.012279 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:56.012337 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.016457 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:56.016535 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:56.053842 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.053867 1465898 cri.go:89] found id: ""
	I0131 03:28:56.053877 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:56.053926 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.057807 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:56.057889 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:28:56.097431 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.097465 1465898 cri.go:89] found id: ""
	I0131 03:28:56.097477 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:28:56.097549 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.101354 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:28:56.101420 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:28:56.136696 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.136725 1465898 cri.go:89] found id: ""
	I0131 03:28:56.136735 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:28:56.136800 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.140584 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:28:56.140661 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:28:56.177606 1465898 cri.go:89] found id: ""
	I0131 03:28:56.177639 1465898 logs.go:284] 0 containers: []
	W0131 03:28:56.177650 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:28:56.177658 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:28:56.177779 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:28:56.215795 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.215824 1465898 cri.go:89] found id: ""
	I0131 03:28:56.215835 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:28:56.215909 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.220297 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:28:56.220324 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:28:56.319500 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:28:56.319544 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.355731 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:28:56.355767 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.410301 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:28:56.410341 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:28:56.858474 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:28:56.858531 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:28:56.903299 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:28:56.903337 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.961020 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:28:56.961070 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.998347 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:28:56.998382 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:28:57.011562 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:28:57.011594 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:28:57.152899 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:28:57.152937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:57.201041 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:28:57.201084 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:57.247253 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:28:57.247289 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.478758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:01.977644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:59.786669 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:28:59.804046 1465898 api_server.go:72] duration metric: took 4m8.808083047s to wait for apiserver process to appear ...
	I0131 03:28:59.804079 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:28:59.804131 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:59.804249 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:59.846418 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:59.846440 1465898 cri.go:89] found id: ""
	I0131 03:28:59.846448 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:59.846516 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.850526 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:59.850588 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:59.892343 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:59.892373 1465898 cri.go:89] found id: ""
	I0131 03:28:59.892382 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:59.892449 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.896483 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:59.896561 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:59.933901 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.933934 1465898 cri.go:89] found id: ""
	I0131 03:28:59.933945 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:59.934012 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.938150 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:59.938232 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:59.980328 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:59.980354 1465898 cri.go:89] found id: ""
	I0131 03:28:59.980363 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:59.980418 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.984866 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:59.984943 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:00.029663 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.029695 1465898 cri.go:89] found id: ""
	I0131 03:29:00.029705 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:00.029753 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.034759 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:00.034827 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:00.084320 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.084347 1465898 cri.go:89] found id: ""
	I0131 03:29:00.084355 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:00.084431 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.088744 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:00.088819 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:00.133028 1465898 cri.go:89] found id: ""
	I0131 03:29:00.133062 1465898 logs.go:284] 0 containers: []
	W0131 03:29:00.133072 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:00.133080 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:00.133145 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:00.175187 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.175219 1465898 cri.go:89] found id: ""
	I0131 03:29:00.175229 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:00.175306 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.179387 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:00.179420 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.233630 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:00.233676 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.271692 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:00.271735 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:00.655131 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:00.655177 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:00.757571 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:00.757628 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:00.805958 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:00.806000 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:00.842604 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:00.842650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:00.888064 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:00.888103 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.939276 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:00.939331 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:00.981965 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:00.982006 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:00.996237 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:00.996265 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:01.129715 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:01.129754 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.677131 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:29:03.684945 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:29:03.687117 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:03.687142 1465898 api_server.go:131] duration metric: took 3.883056117s to wait for apiserver health ...
	I0131 03:29:03.687171 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:03.687245 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:03.687303 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:03.727289 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:03.727314 1465898 cri.go:89] found id: ""
	I0131 03:29:03.727322 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:29:03.727375 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.731095 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:03.731158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:03.779103 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.779134 1465898 cri.go:89] found id: ""
	I0131 03:29:03.779144 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:29:03.779223 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.783387 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:03.783459 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:03.821342 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:03.821368 1465898 cri.go:89] found id: ""
	I0131 03:29:03.821376 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:29:03.821438 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.825907 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:03.825990 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:03.863826 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:03.863853 1465898 cri.go:89] found id: ""
	I0131 03:29:03.863867 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:29:03.863919 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.868093 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:03.868163 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:03.908653 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:03.908681 1465898 cri.go:89] found id: ""
	I0131 03:29:03.908690 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:03.908750 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.912998 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:03.913078 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:03.961104 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:03.961131 1465898 cri.go:89] found id: ""
	I0131 03:29:03.961139 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:03.961212 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.965913 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:03.965996 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:04.003791 1465898 cri.go:89] found id: ""
	I0131 03:29:04.003824 1465898 logs.go:284] 0 containers: []
	W0131 03:29:04.003833 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:04.003840 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:04.003907 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:04.040736 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.040773 1465898 cri.go:89] found id: ""
	I0131 03:29:04.040785 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:04.040852 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:04.045013 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:04.045042 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:04.091615 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:04.091650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:04.204602 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:04.204638 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:04.257510 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:04.257548 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:04.296585 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:04.296619 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:04.360438 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:04.360480 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.398825 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:04.398858 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:04.711357 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:04.711403 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:04.804895 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:04.804940 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:04.819394 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:04.819426 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:04.869897 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:04.869937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:04.918002 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:04.918040 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:07.471428 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:07.471466 1465898 system_pods.go:61] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.471474 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.471481 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.471488 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.471495 1465898 system_pods.go:61] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.471501 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.471516 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.471524 1465898 system_pods.go:61] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.471535 1465898 system_pods.go:74] duration metric: took 3.784356035s to wait for pod list to return data ...
	I0131 03:29:07.471552 1465898 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:07.474519 1465898 default_sa.go:45] found service account: "default"
	I0131 03:29:07.474547 1465898 default_sa.go:55] duration metric: took 2.986529ms for default service account to be created ...
	I0131 03:29:07.474559 1465898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:07.480778 1465898 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:07.480805 1465898 system_pods.go:89] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.480810 1465898 system_pods.go:89] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.480816 1465898 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.480823 1465898 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.480827 1465898 system_pods.go:89] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.480831 1465898 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.480837 1465898 system_pods.go:89] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.480842 1465898 system_pods.go:89] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.480850 1465898 system_pods.go:126] duration metric: took 6.285456ms to wait for k8s-apps to be running ...
	I0131 03:29:07.480856 1465898 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:07.480905 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:07.497612 1465898 system_svc.go:56] duration metric: took 16.74594ms WaitForService to wait for kubelet.
	I0131 03:29:07.497643 1465898 kubeadm.go:581] duration metric: took 4m16.501686281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:07.497678 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:07.501680 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:07.501732 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:07.501748 1465898 node_conditions.go:105] duration metric: took 4.063716ms to run NodePressure ...
	I0131 03:29:07.501763 1465898 start.go:228] waiting for startup goroutines ...
	I0131 03:29:07.501772 1465898 start.go:233] waiting for cluster config update ...
	I0131 03:29:07.501818 1465898 start.go:242] writing updated cluster config ...
	I0131 03:29:07.502234 1465898 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:07.559193 1465898 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:07.561350 1465898 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-873005" cluster and "default" namespace by default
	I0131 03:29:03.978465 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:06.477545 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:08.480466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:10.978639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:13.478152 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978967 1466459 pod_ready.go:81] duration metric: took 4m0.008624682s waiting for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	E0131 03:29:15.978976 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:29:15.978984 1466459 pod_ready.go:38] duration metric: took 4m1.99139457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:29:15.978999 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:29:15.979026 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:15.979074 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:16.041735 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:16.041774 1466459 cri.go:89] found id: ""
	I0131 03:29:16.041784 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:16.041845 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.046910 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:16.046982 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:16.085124 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.085156 1466459 cri.go:89] found id: ""
	I0131 03:29:16.085166 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:16.085226 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.089189 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:16.089274 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:16.129255 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.129286 1466459 cri.go:89] found id: ""
	I0131 03:29:16.129296 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:16.129352 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.133364 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:16.133451 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:16.170605 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.170634 1466459 cri.go:89] found id: ""
	I0131 03:29:16.170643 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:16.170704 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.175117 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:16.175197 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:16.210139 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:16.210169 1466459 cri.go:89] found id: ""
	I0131 03:29:16.210179 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:16.210248 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.214877 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:16.214960 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:16.257772 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.257797 1466459 cri.go:89] found id: ""
	I0131 03:29:16.257807 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:16.257878 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.262276 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:16.262341 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:16.304203 1466459 cri.go:89] found id: ""
	I0131 03:29:16.304233 1466459 logs.go:284] 0 containers: []
	W0131 03:29:16.304241 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:16.304248 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:16.304325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:16.343337 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:16.343360 1466459 cri.go:89] found id: ""
	I0131 03:29:16.343368 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:16.343423 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.347098 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:16.347129 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.389501 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:16.389544 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.426153 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:16.426196 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.476241 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:16.476281 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.533086 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:16.533131 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:16.575664 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:16.575701 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:16.675622 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:16.675669 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:16.690251 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:16.690285 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:16.828714 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:16.828748 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:17.253277 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:17.253335 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:17.304285 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:17.304323 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:17.340432 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:17.340465 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:19.889056 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:29:19.904225 1466459 api_server.go:72] duration metric: took 4m8.286630357s to wait for apiserver process to appear ...
	I0131 03:29:19.904258 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:29:19.904302 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:19.904375 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:19.939116 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:19.939147 1466459 cri.go:89] found id: ""
	I0131 03:29:19.939159 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:19.939225 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.943273 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:19.943351 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:19.979411 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:19.979436 1466459 cri.go:89] found id: ""
	I0131 03:29:19.979445 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:19.979512 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.984054 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:19.984148 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:20.022949 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.022978 1466459 cri.go:89] found id: ""
	I0131 03:29:20.022988 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:20.023046 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.027252 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:20.027325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:20.064215 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.064238 1466459 cri.go:89] found id: ""
	I0131 03:29:20.064246 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:20.064303 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.068589 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:20.068687 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:20.106750 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.106781 1466459 cri.go:89] found id: ""
	I0131 03:29:20.106792 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:20.106854 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.111267 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:20.111342 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:20.147750 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.147789 1466459 cri.go:89] found id: ""
	I0131 03:29:20.147801 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:20.147873 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.152882 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:20.152950 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:20.191082 1466459 cri.go:89] found id: ""
	I0131 03:29:20.191121 1466459 logs.go:284] 0 containers: []
	W0131 03:29:20.191133 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:20.191143 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:20.191226 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:20.226346 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.226373 1466459 cri.go:89] found id: ""
	I0131 03:29:20.226382 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:20.226436 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.230561 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:20.230607 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:20.596919 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:20.596968 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:20.691142 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:20.691184 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:20.750659 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:20.750692 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.816839 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:20.816882 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.852691 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:20.852730 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.909788 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:20.909828 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.950311 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:20.950360 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.985515 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:20.985554 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:21.030306 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:21.030350 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:21.043130 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:21.043172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:21.160716 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:21.160763 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.706550 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:29:23.711528 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:29:23.713998 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:23.714027 1466459 api_server.go:131] duration metric: took 3.809760557s to wait for apiserver health ...
	I0131 03:29:23.714039 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:23.714070 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:23.714142 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:23.754990 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:23.755017 1466459 cri.go:89] found id: ""
	I0131 03:29:23.755028 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:23.755091 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.759151 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:23.759224 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:23.798410 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.798448 1466459 cri.go:89] found id: ""
	I0131 03:29:23.798459 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:23.798541 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.802512 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:23.802588 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:23.840962 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:23.840991 1466459 cri.go:89] found id: ""
	I0131 03:29:23.841001 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:23.841073 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.844943 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:23.845021 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:23.882314 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:23.882355 1466459 cri.go:89] found id: ""
	I0131 03:29:23.882368 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:23.882438 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.886227 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:23.886292 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:23.925001 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:23.925031 1466459 cri.go:89] found id: ""
	I0131 03:29:23.925042 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:23.925100 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.929531 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:23.929601 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:23.969068 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:23.969098 1466459 cri.go:89] found id: ""
	I0131 03:29:23.969108 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:23.969167 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.973154 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:23.973216 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:24.010928 1466459 cri.go:89] found id: ""
	I0131 03:29:24.010956 1466459 logs.go:284] 0 containers: []
	W0131 03:29:24.010963 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:24.010970 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:24.011026 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:24.052588 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.052614 1466459 cri.go:89] found id: ""
	I0131 03:29:24.052622 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:24.052678 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:24.056735 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:24.056762 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:24.105290 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:24.105324 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:24.152634 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:24.152678 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:24.198981 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:24.199021 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:24.247140 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:24.247172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:24.287472 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:24.287502 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:24.344060 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:24.344101 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.384811 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:24.384846 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:24.707577 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:24.707628 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:24.756450 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:24.756490 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:24.844886 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:24.844935 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:24.859102 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:24.859132 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:27.482952 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:27.482992 1466459 system_pods.go:61] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.483000 1466459 system_pods.go:61] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.483007 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.483027 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.483038 1466459 system_pods.go:61] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.483049 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.483056 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.483066 1466459 system_pods.go:61] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.483076 1466459 system_pods.go:74] duration metric: took 3.76903179s to wait for pod list to return data ...
	I0131 03:29:27.483087 1466459 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:27.486092 1466459 default_sa.go:45] found service account: "default"
	I0131 03:29:27.486121 1466459 default_sa.go:55] duration metric: took 3.025473ms for default service account to be created ...
	I0131 03:29:27.486131 1466459 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:27.491964 1466459 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:27.491989 1466459 system_pods.go:89] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.491997 1466459 system_pods.go:89] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.492004 1466459 system_pods.go:89] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.492010 1466459 system_pods.go:89] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.492015 1466459 system_pods.go:89] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.492022 1466459 system_pods.go:89] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.492032 1466459 system_pods.go:89] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.492044 1466459 system_pods.go:89] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.492059 1466459 system_pods.go:126] duration metric: took 5.920402ms to wait for k8s-apps to be running ...
	I0131 03:29:27.492076 1466459 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:27.492131 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:27.507857 1466459 system_svc.go:56] duration metric: took 15.770556ms WaitForService to wait for kubelet.
	I0131 03:29:27.507891 1466459 kubeadm.go:581] duration metric: took 4m15.890307101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:27.507918 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:27.510942 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:27.510968 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:27.510980 1466459 node_conditions.go:105] duration metric: took 3.056564ms to run NodePressure ...
	I0131 03:29:27.510992 1466459 start.go:228] waiting for startup goroutines ...
	I0131 03:29:27.510998 1466459 start.go:233] waiting for cluster config update ...
	I0131 03:29:27.511008 1466459 start.go:242] writing updated cluster config ...
	I0131 03:29:27.511334 1466459 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:27.564506 1466459 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:27.566730 1466459 out.go:177] * Done! kubectl is now configured to use "embed-certs-958254" cluster and "default" namespace by default
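	For reference, the health-check and log-gathering steps recorded above are plain crictl/journalctl/kubectl invocations; the lines below are a minimal shell sketch for reproducing them by hand. It assumes a shell on the node (e.g. via "minikube ssh -p embed-certs-958254"), and the trailing curl -k probe is an assumption, since the test harness authenticates with the cluster's client certificates rather than skipping TLS verification.

	  # Recent kubelet and CRI-O journal entries (same units and line limits as in the log above).
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400

	  # Locate the kube-apiserver container (first match) and dump its last 400 log lines.
	  APISERVER_ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	  sudo crictl logs --tail 400 "${APISERVER_ID}"

	  # Describe nodes with the kubelet-local kubectl binary and kubeconfig, as the harness does.
	  sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	  # Apiserver health probe; the log above checks this endpoint and expects "ok".
	  curl -ks https://192.168.39.232:8443/healthz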
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:19:04 UTC, ends at Wed 2024-01-31 03:35:20 UTC. --
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.331997514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672120331973781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1af376c3-975f-4a2f-8ed6-6fc4f4aa01db name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.332648579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9037f2de-34f1-46b5-8347-13ed8feb452d name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.332719678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9037f2de-34f1-46b5-8347-13ed8feb452d name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.332949634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4536b256460d01a484389dc7907e5c6dc509dd6e4a7ae0c7baf77d5a1571a858,PodSandboxId:1eb30c88799a331fc2a1310e2f80ce52a271cbf6d0e98d9edf30182e61a8e477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671505052101079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b345c5ea-80fe-48c2-9a7a-f10b0cd4d482,},Annotations:map[string]string{io.kubernetes.container.hash: 9af98072,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89474b25c515ccc7c132dfe986483094ca1276f8b8157be982ea69240ca4c5f1,PodSandboxId:8bf2df2c5adaaf24eff7217103ff9cee094e585d4a6933acfd92c599a0fbdf18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706671502854247676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzft2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a2844e-22c6-4184-9f2b-5030a29dc0ec,},Annotations:map[string]string{io.kubernetes.container.hash: 66373d6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e69cae579f0e5e03faf1d1ebf42f61c5a3fe9faff08613cd96d67da59dfd0d,PodSandboxId:04e24dd17a190c4cc5ec5e34e76ca02525b48beab0d3b578e19a6bcdde29251b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706671501781771620,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qq7jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbb4201f-8bce-408f-a16d-57d8f91c8304,},Annotations:map[string]string{io.kubernetes.container.hash: 1ec11e45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5512b85314b2266a6bda771d3f9a1d08ae1ee23c06aec0a748cf16f784af4c,PodSandboxId:ce56339770d6140898815c00f590a39a236d2a3b9cfd40ca6dd92529d2fe0f9b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706671474994147157,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa2f412fd968a1485f6450db34ac4a4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f827b34a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e481611f29cb77afd2c1e2d755cf36bbb7df5edf8eca0217331120703733b0,PodSandboxId:ee476375706738ecc48a4928ca38f7f1dc2b9d1c44172dfcb7d088e6de610516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706671473779058639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65cad251629fc86a869cdfa15ac4e874beb1793b22a23f35ee8602d125f45f8,PodSandboxId:b122af61c8f3f8794a85acc21b60b6066e7aad749940757619944f644ece37b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706671473660335300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91db4b95f9102ce4d04f4534f69d7f825c5a497c849389fc3c9b52bae5910889,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706671472928516729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670c449d91b909470ac5f604bae93cf22b5d857b098cb5de7e5a291618367429,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1706671174463880942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9037f2de-34f1-46b5-8347-13ed8feb452d name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.370495435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4eb151d7-70cd-4e8e-ac3c-ee4f9fca9b72 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.370622080Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4eb151d7-70cd-4e8e-ac3c-ee4f9fca9b72 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.372436364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5bd95943-2f65-4d6a-a8ac-d77edb4a7f85 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.372920446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672120372902517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5bd95943-2f65-4d6a-a8ac-d77edb4a7f85 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.373704318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=31524a87-5168-4b43-a0cb-e0f7c811b3da name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.373754752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=31524a87-5168-4b43-a0cb-e0f7c811b3da name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.373944495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4536b256460d01a484389dc7907e5c6dc509dd6e4a7ae0c7baf77d5a1571a858,PodSandboxId:1eb30c88799a331fc2a1310e2f80ce52a271cbf6d0e98d9edf30182e61a8e477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671505052101079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b345c5ea-80fe-48c2-9a7a-f10b0cd4d482,},Annotations:map[string]string{io.kubernetes.container.hash: 9af98072,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89474b25c515ccc7c132dfe986483094ca1276f8b8157be982ea69240ca4c5f1,PodSandboxId:8bf2df2c5adaaf24eff7217103ff9cee094e585d4a6933acfd92c599a0fbdf18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706671502854247676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzft2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a2844e-22c6-4184-9f2b-5030a29dc0ec,},Annotations:map[string]string{io.kubernetes.container.hash: 66373d6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e69cae579f0e5e03faf1d1ebf42f61c5a3fe9faff08613cd96d67da59dfd0d,PodSandboxId:04e24dd17a190c4cc5ec5e34e76ca02525b48beab0d3b578e19a6bcdde29251b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706671501781771620,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qq7jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbb4201f-8bce-408f-a16d-57d8f91c8304,},Annotations:map[string]string{io.kubernetes.container.hash: 1ec11e45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5512b85314b2266a6bda771d3f9a1d08ae1ee23c06aec0a748cf16f784af4c,PodSandboxId:ce56339770d6140898815c00f590a39a236d2a3b9cfd40ca6dd92529d2fe0f9b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706671474994147157,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa2f412fd968a1485f6450db34ac4a4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f827b34a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e481611f29cb77afd2c1e2d755cf36bbb7df5edf8eca0217331120703733b0,PodSandboxId:ee476375706738ecc48a4928ca38f7f1dc2b9d1c44172dfcb7d088e6de610516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706671473779058639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65cad251629fc86a869cdfa15ac4e874beb1793b22a23f35ee8602d125f45f8,PodSandboxId:b122af61c8f3f8794a85acc21b60b6066e7aad749940757619944f644ece37b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706671473660335300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91db4b95f9102ce4d04f4534f69d7f825c5a497c849389fc3c9b52bae5910889,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706671472928516729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670c449d91b909470ac5f604bae93cf22b5d857b098cb5de7e5a291618367429,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1706671174463880942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=31524a87-5168-4b43-a0cb-e0f7c811b3da name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.410177142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=234852f3-be2e-4f2b-87dc-6f90043890fa name=/runtime.v1.RuntimeService/Version
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.410241546Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=234852f3-be2e-4f2b-87dc-6f90043890fa name=/runtime.v1.RuntimeService/Version
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.412645756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=aafba9ff-cb76-4b6c-8a36-c47886d3a46f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.413283005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672120413261791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=aafba9ff-cb76-4b6c-8a36-c47886d3a46f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.414003818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8badc3b9-e76e-479a-85a7-c525c6a649b4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.414051419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8badc3b9-e76e-479a-85a7-c525c6a649b4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.414234507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4536b256460d01a484389dc7907e5c6dc509dd6e4a7ae0c7baf77d5a1571a858,PodSandboxId:1eb30c88799a331fc2a1310e2f80ce52a271cbf6d0e98d9edf30182e61a8e477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671505052101079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b345c5ea-80fe-48c2-9a7a-f10b0cd4d482,},Annotations:map[string]string{io.kubernetes.container.hash: 9af98072,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89474b25c515ccc7c132dfe986483094ca1276f8b8157be982ea69240ca4c5f1,PodSandboxId:8bf2df2c5adaaf24eff7217103ff9cee094e585d4a6933acfd92c599a0fbdf18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706671502854247676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzft2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a2844e-22c6-4184-9f2b-5030a29dc0ec,},Annotations:map[string]string{io.kubernetes.container.hash: 66373d6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e69cae579f0e5e03faf1d1ebf42f61c5a3fe9faff08613cd96d67da59dfd0d,PodSandboxId:04e24dd17a190c4cc5ec5e34e76ca02525b48beab0d3b578e19a6bcdde29251b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706671501781771620,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qq7jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbb4201f-8bce-408f-a16d-57d8f91c8304,},Annotations:map[string]string{io.kubernetes.container.hash: 1ec11e45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5512b85314b2266a6bda771d3f9a1d08ae1ee23c06aec0a748cf16f784af4c,PodSandboxId:ce56339770d6140898815c00f590a39a236d2a3b9cfd40ca6dd92529d2fe0f9b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706671474994147157,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa2f412fd968a1485f6450db34ac4a4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f827b34a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e481611f29cb77afd2c1e2d755cf36bbb7df5edf8eca0217331120703733b0,PodSandboxId:ee476375706738ecc48a4928ca38f7f1dc2b9d1c44172dfcb7d088e6de610516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706671473779058639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65cad251629fc86a869cdfa15ac4e874beb1793b22a23f35ee8602d125f45f8,PodSandboxId:b122af61c8f3f8794a85acc21b60b6066e7aad749940757619944f644ece37b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706671473660335300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91db4b95f9102ce4d04f4534f69d7f825c5a497c849389fc3c9b52bae5910889,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706671472928516729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670c449d91b909470ac5f604bae93cf22b5d857b098cb5de7e5a291618367429,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1706671174463880942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8badc3b9-e76e-479a-85a7-c525c6a649b4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.445985572Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a2ab8dd7-b6c8-4336-b703-123f4da3e56d name=/runtime.v1.RuntimeService/Version
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.446042299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a2ab8dd7-b6c8-4336-b703-123f4da3e56d name=/runtime.v1.RuntimeService/Version
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.447048249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6d67b243-ab35-4507-976f-2593ef9e9798 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.447397922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672120447385039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6d67b243-ab35-4507-976f-2593ef9e9798 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.448216359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a57c03eb-ab19-446d-9758-ddadfc823612 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.448261901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a57c03eb-ab19-446d-9758-ddadfc823612 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:35:20 old-k8s-version-711547 crio[705]: time="2024-01-31 03:35:20.448420843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4536b256460d01a484389dc7907e5c6dc509dd6e4a7ae0c7baf77d5a1571a858,PodSandboxId:1eb30c88799a331fc2a1310e2f80ce52a271cbf6d0e98d9edf30182e61a8e477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671505052101079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b345c5ea-80fe-48c2-9a7a-f10b0cd4d482,},Annotations:map[string]string{io.kubernetes.container.hash: 9af98072,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89474b25c515ccc7c132dfe986483094ca1276f8b8157be982ea69240ca4c5f1,PodSandboxId:8bf2df2c5adaaf24eff7217103ff9cee094e585d4a6933acfd92c599a0fbdf18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706671502854247676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzft2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a2844e-22c6-4184-9f2b-5030a29dc0ec,},Annotations:map[string]string{io.kubernetes.container.hash: 66373d6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e69cae579f0e5e03faf1d1ebf42f61c5a3fe9faff08613cd96d67da59dfd0d,PodSandboxId:04e24dd17a190c4cc5ec5e34e76ca02525b48beab0d3b578e19a6bcdde29251b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706671501781771620,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qq7jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbb4201f-8bce-408f-a16d-57d8f91c8304,},Annotations:map[string]string{io.kubernetes.container.hash: 1ec11e45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5512b85314b2266a6bda771d3f9a1d08ae1ee23c06aec0a748cf16f784af4c,PodSandboxId:ce56339770d6140898815c00f590a39a236d2a3b9cfd40ca6dd92529d2fe0f9b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706671474994147157,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa2f412fd968a1485f6450db34ac4a4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f827b34a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e481611f29cb77afd2c1e2d755cf36bbb7df5edf8eca0217331120703733b0,PodSandboxId:ee476375706738ecc48a4928ca38f7f1dc2b9d1c44172dfcb7d088e6de610516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706671473779058639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65cad251629fc86a869cdfa15ac4e874beb1793b22a23f35ee8602d125f45f8,PodSandboxId:b122af61c8f3f8794a85acc21b60b6066e7aad749940757619944f644ece37b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706671473660335300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91db4b95f9102ce4d04f4534f69d7f825c5a497c849389fc3c9b52bae5910889,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706671472928516729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670c449d91b909470ac5f604bae93cf22b5d857b098cb5de7e5a291618367429,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1706671174463880942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a57c03eb-ab19-446d-9758-ddadfc823612 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4536b256460d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   1eb30c88799a3       storage-provisioner
	89474b25c515c       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   8bf2df2c5adaa       kube-proxy-wzft2
	d3e69cae579f0       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   04e24dd17a190       coredns-5644d7b6d9-qq7jp
	df5512b85314b       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   ce56339770d61       etcd-old-k8s-version-711547
	62e481611f29c       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   ee47637570673       kube-scheduler-old-k8s-version-711547
	f65cad251629f       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   b122af61c8f3f       kube-controller-manager-old-k8s-version-711547
	91db4b95f9102       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            1                   f6e11203a54dd       kube-apiserver-old-k8s-version-711547
	670c449d91b90       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   15 minutes ago      Exited              kube-apiserver            0                   f6e11203a54dd       kube-apiserver-old-k8s-version-711547
	
	
	==> coredns [d3e69cae579f0e5e03faf1d1ebf42f61c5a3fe9faff08613cd96d67da59dfd0d] <==
	.:53
	2024-01-31T03:25:02.447Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2024-01-31T03:25:02.447Z [INFO] CoreDNS-1.6.2
	2024-01-31T03:25:02.447Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2024-01-31T03:25:38.913Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               old-k8s-version-711547
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-711547
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=old-k8s-version-711547
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:34:39 +0000   Wed, 31 Jan 2024 03:24:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:34:39 +0000   Wed, 31 Jan 2024 03:24:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:34:39 +0000   Wed, 31 Jan 2024 03:24:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:34:39 +0000   Wed, 31 Jan 2024 03:24:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.63
	  Hostname:    old-k8s-version-711547
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 5ed24fdc301b463c9e01bc891888c917
	 System UUID:                5ed24fdc-301b-463c-9e01-bc891888c917
	 Boot ID:                    6a4b3c64-df84-40b8-a1f8-6a83b2dafacf
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-qq7jp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-711547                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                kube-apiserver-old-k8s-version-711547             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                kube-controller-manager-old-k8s-version-711547    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                kube-proxy-wzft2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-711547             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                metrics-server-74d5856cc6-sgw75                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-711547     Node old-k8s-version-711547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-711547     Node old-k8s-version-711547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-711547     Node old-k8s-version-711547 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-711547  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan31 03:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063911] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan31 03:19] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.825967] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.137447] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.369020] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.195460] systemd-fstab-generator[632]: Ignoring "noauto" for root device
	[  +0.106566] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.168399] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.114746] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.213085] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[ +17.764223] systemd-fstab-generator[1010]: Ignoring "noauto" for root device
	[  +0.422548] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +16.991333] kauditd_printk_skb: 3 callbacks suppressed
	[Jan31 03:20] kauditd_printk_skb: 2 callbacks suppressed
	[Jan31 03:24] systemd-fstab-generator[3104]: Ignoring "noauto" for root device
	[  +0.588175] kauditd_printk_skb: 6 callbacks suppressed
	[Jan31 03:25] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [df5512b85314b2266a6bda771d3f9a1d08ae1ee23c06aec0a748cf16f784af4c] <==
	2024-01-31 03:24:35.119489 I | raft: 7a1fa572d5c18c56 became follower at term 0
	2024-01-31 03:24:35.119498 I | raft: newRaft 7a1fa572d5c18c56 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-31 03:24:35.119501 I | raft: 7a1fa572d5c18c56 became follower at term 1
	2024-01-31 03:24:35.127939 W | auth: simple token is not cryptographically signed
	2024-01-31 03:24:35.132411 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-31 03:24:35.133878 I | etcdserver: 7a1fa572d5c18c56 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-31 03:24:35.134507 I | etcdserver/membership: added member 7a1fa572d5c18c56 [https://192.168.50.63:2380] to cluster 77c04c1230f4f4e2
	2024-01-31 03:24:35.135323 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-31 03:24:35.135484 I | embed: listening for metrics on http://192.168.50.63:2381
	2024-01-31 03:24:35.135682 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-31 03:24:35.820151 I | raft: 7a1fa572d5c18c56 is starting a new election at term 1
	2024-01-31 03:24:35.820308 I | raft: 7a1fa572d5c18c56 became candidate at term 2
	2024-01-31 03:24:35.820348 I | raft: 7a1fa572d5c18c56 received MsgVoteResp from 7a1fa572d5c18c56 at term 2
	2024-01-31 03:24:35.820380 I | raft: 7a1fa572d5c18c56 became leader at term 2
	2024-01-31 03:24:35.820401 I | raft: raft.node: 7a1fa572d5c18c56 elected leader 7a1fa572d5c18c56 at term 2
	2024-01-31 03:24:35.820915 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-31 03:24:35.821258 I | etcdserver: published {Name:old-k8s-version-711547 ClientURLs:[https://192.168.50.63:2379]} to cluster 77c04c1230f4f4e2
	2024-01-31 03:24:35.821481 I | embed: ready to serve client requests
	2024-01-31 03:24:35.822337 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-31 03:24:35.822427 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-31 03:24:35.822463 I | embed: ready to serve client requests
	2024-01-31 03:24:35.824047 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-31 03:24:35.824822 I | embed: serving client requests on 192.168.50.63:2379
	2024-01-31 03:34:35.847010 I | mvcc: store.index: compact 663
	2024-01-31 03:34:35.849181 I | mvcc: finished scheduled compaction at 663 (took 1.600008ms)
	
	
	==> kernel <==
	 03:35:20 up 16 min,  0 users,  load average: 0.34, 0.18, 0.14
	Linux old-k8s-version-711547 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [670c449d91b909470ac5f604bae93cf22b5d857b098cb5de7e5a291618367429] <==
	W0131 03:24:29.294835       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294413       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294858       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294897       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294923       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294647       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294939       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294979       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294980       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295017       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295023       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295063       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295066       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295106       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295110       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295147       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295151       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295268       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295308       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295347       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295395       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294897       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295184       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295208       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295504       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [91db4b95f9102ce4d04f4534f69d7f825c5a497c849389fc3c9b52bae5910889] <==
	I0131 03:28:04.282678       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:28:04.282821       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:28:04.282893       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:28:04.282904       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:29:40.381470       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:29:40.381663       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:29:40.381842       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:29:40.381871       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:30:40.382152       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:30:40.382290       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:30:40.382333       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:30:40.382341       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:32:40.382760       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:32:40.382881       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:32:40.382973       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:32:40.382984       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:34:40.383895       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:34:40.384013       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:34:40.384127       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:34:40.384154       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f65cad251629fc86a869cdfa15ac4e874beb1793b22a23f35ee8602d125f45f8] <==
	E0131 03:29:02.816469       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:29:16.780246       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:29:33.068633       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:29:48.782292       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:30:03.321044       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:30:20.784451       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:30:33.573087       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:30:52.786436       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:31:03.825729       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:31:24.788762       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:31:34.077951       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:31:56.791088       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:32:04.330056       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:32:28.792896       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:32:34.582054       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:33:00.795023       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:33:04.834062       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:33:32.797370       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:33:35.086813       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:34:04.799719       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:34:05.340037       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0131 03:34:35.591945       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:34:36.801684       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:35:05.843691       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:35:08.803426       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [89474b25c515ccc7c132dfe986483094ca1276f8b8157be982ea69240ca4c5f1] <==
	W0131 03:25:03.134003       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0131 03:25:03.142922       1 node.go:135] Successfully retrieved node IP: 192.168.50.63
	I0131 03:25:03.142999       1 server_others.go:149] Using iptables Proxier.
	I0131 03:25:03.143285       1 server.go:529] Version: v1.16.0
	I0131 03:25:03.154528       1 config.go:313] Starting service config controller
	I0131 03:25:03.154677       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0131 03:25:03.154720       1 config.go:131] Starting endpoints config controller
	I0131 03:25:03.154740       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0131 03:25:03.259773       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0131 03:25:03.260704       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [62e481611f29cb77afd2c1e2d755cf36bbb7df5edf8eca0217331120703733b0] <==
	W0131 03:24:39.348472       1 authentication.go:79] Authentication is disabled
	I0131 03:24:39.348483       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0131 03:24:39.348973       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0131 03:24:39.404513       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:24:39.412324       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:24:39.417888       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:24:39.417983       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 03:24:39.418038       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:39.418091       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 03:24:39.418282       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:24:39.418333       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 03:24:39.418382       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:39.418524       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:24:39.419673       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0131 03:24:40.411438       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:24:40.413633       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:24:40.419836       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:24:40.421045       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 03:24:40.422455       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:40.423699       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 03:24:40.426297       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:24:40.426982       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 03:24:40.427924       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:40.428692       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:24:40.429777       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:19:04 UTC, ends at Wed 2024-01-31 03:35:21 UTC. --
	Jan 31 03:30:44 old-k8s-version-711547 kubelet[3110]: E0131 03:30:44.274317    3110 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:30:44 old-k8s-version-711547 kubelet[3110]: E0131 03:30:44.274425    3110 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:30:44 old-k8s-version-711547 kubelet[3110]: E0131 03:30:44.274486    3110 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:30:44 old-k8s-version-711547 kubelet[3110]: E0131 03:30:44.274518    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 31 03:30:57 old-k8s-version-711547 kubelet[3110]: E0131 03:30:57.262900    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:31:10 old-k8s-version-711547 kubelet[3110]: E0131 03:31:10.264689    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:31:23 old-k8s-version-711547 kubelet[3110]: E0131 03:31:23.263627    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:31:38 old-k8s-version-711547 kubelet[3110]: E0131 03:31:38.263345    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:31:53 old-k8s-version-711547 kubelet[3110]: E0131 03:31:53.262967    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:32:08 old-k8s-version-711547 kubelet[3110]: E0131 03:32:08.264449    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:32:21 old-k8s-version-711547 kubelet[3110]: E0131 03:32:21.263381    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:32:32 old-k8s-version-711547 kubelet[3110]: E0131 03:32:32.263100    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:32:47 old-k8s-version-711547 kubelet[3110]: E0131 03:32:47.263046    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:33:01 old-k8s-version-711547 kubelet[3110]: E0131 03:33:01.263108    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:33:14 old-k8s-version-711547 kubelet[3110]: E0131 03:33:14.262905    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:33:25 old-k8s-version-711547 kubelet[3110]: E0131 03:33:25.263601    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:33:40 old-k8s-version-711547 kubelet[3110]: E0131 03:33:40.263120    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:33:53 old-k8s-version-711547 kubelet[3110]: E0131 03:33:53.262947    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:34:06 old-k8s-version-711547 kubelet[3110]: E0131 03:34:06.263183    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:34:19 old-k8s-version-711547 kubelet[3110]: E0131 03:34:19.262859    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:34:32 old-k8s-version-711547 kubelet[3110]: E0131 03:34:32.391883    3110 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 31 03:34:33 old-k8s-version-711547 kubelet[3110]: E0131 03:34:33.263011    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:34:45 old-k8s-version-711547 kubelet[3110]: E0131 03:34:45.262949    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:34:59 old-k8s-version-711547 kubelet[3110]: E0131 03:34:59.262916    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:35:13 old-k8s-version-711547 kubelet[3110]: E0131 03:35:13.262828    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [4536b256460d01a484389dc7907e5c6dc509dd6e4a7ae0c7baf77d5a1571a858] <==
	I0131 03:25:05.150979       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 03:25:05.161301       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 03:25:05.161497       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 03:25:05.171192       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 03:25:05.171973       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"703f8f47-4881-4eaf-baa8-ff28fdfbd411", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-711547_2f9ab52a-6a01-4c60-9d14-112b31dd2894 became leader
	I0131 03:25:05.172243       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-711547_2f9ab52a-6a01-4c60-9d14-112b31dd2894!
	I0131 03:25:05.272610       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-711547_2f9ab52a-6a01-4c60-9d14-112b31dd2894!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-711547 -n old-k8s-version-711547
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-711547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-sgw75
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-711547 describe pod metrics-server-74d5856cc6-sgw75
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-711547 describe pod metrics-server-74d5856cc6-sgw75: exit status 1 (67.377289ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-sgw75" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-711547 describe pod metrics-server-74d5856cc6-sgw75: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0131 03:29:11.556841 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-31 03:38:08.168236067 +0000 UTC m=+5646.809583894
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-873005 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-873005 logs -n 25: (1.637021278s)
E0131 03:38:10.633790 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-711547        | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC | 31 Jan 24 03:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-873005  | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC |                     |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229073             | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229073                  | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229073 --memory=2200 --alsologtostderr   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-229073 image list                           | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-096443 | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | disable-driver-mounts-096443                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625812                  | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:25 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-711547             | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-873005       | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-958254            | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:29 UTC |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-958254                 | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:17 UTC | 31 Jan 24 03:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:17:03
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:17:03.356553 1466459 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:17:03.356722 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356731 1466459 out.go:309] Setting ErrFile to fd 2...
	I0131 03:17:03.356736 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356921 1466459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:17:03.357497 1466459 out.go:303] Setting JSON to false
	I0131 03:17:03.358564 1466459 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28767,"bootTime":1706642257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:17:03.358632 1466459 start.go:138] virtualization: kvm guest
	I0131 03:17:03.361346 1466459 out.go:177] * [embed-certs-958254] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:17:03.363037 1466459 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:17:03.363052 1466459 notify.go:220] Checking for updates...
	I0131 03:17:03.364655 1466459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:17:03.366388 1466459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:17:03.368086 1466459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:17:03.369351 1466459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:17:03.370735 1466459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:17:03.372623 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:17:03.373004 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.373116 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.388091 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0131 03:17:03.388612 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.389200 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.389224 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.389606 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.389816 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.390157 1466459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:17:03.390631 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.390696 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.407513 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0131 03:17:03.408013 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.408552 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.408578 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.408936 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.409175 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.446580 1466459 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 03:17:03.447834 1466459 start.go:298] selected driver: kvm2
	I0131 03:17:03.447850 1466459 start.go:902] validating driver "kvm2" against &{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.447974 1466459 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:17:03.448798 1466459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.448929 1466459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:17:03.464292 1466459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:17:03.464713 1466459 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:17:03.464803 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:17:03.464821 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:17:03.464840 1466459 start_flags.go:321] config:
	{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.465034 1466459 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.466926 1466459 out.go:177] * Starting control plane node embed-certs-958254 in cluster embed-certs-958254
	I0131 03:17:03.166851 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:03.468094 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:17:03.468158 1466459 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:17:03.468179 1466459 cache.go:56] Caching tarball of preloaded images
	I0131 03:17:03.468267 1466459 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:17:03.468280 1466459 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:17:03.468422 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:17:03.468675 1466459 start.go:365] acquiring machines lock for embed-certs-958254: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:17:09.246814 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:12.318761 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:18.398731 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:21.470788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:27.550785 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:30.622804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:36.702802 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:39.774755 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:45.854764 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:48.926773 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:55.006804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:58.078768 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:04.158801 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:07.230749 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:13.310800 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:16.382788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:22.462833 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:25.534734 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:31.614821 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:34.686831 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:40.766796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:43.838796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:49.918807 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:52.923102 1465727 start.go:369] acquired machines lock for "old-k8s-version-711547" in 4m24.328353275s
	I0131 03:18:52.923156 1465727 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:18:52.923163 1465727 fix.go:54] fixHost starting: 
	I0131 03:18:52.923502 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:18:52.923535 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:18:52.938858 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0131 03:18:52.939426 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:18:52.939966 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:18:52.939993 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:18:52.940435 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:18:52.940700 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:18:52.940890 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:18:52.942694 1465727 fix.go:102] recreateIfNeeded on old-k8s-version-711547: state=Stopped err=<nil>
	I0131 03:18:52.942735 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	W0131 03:18:52.942937 1465727 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:18:52.944846 1465727 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-711547" ...
	I0131 03:18:52.946449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Start
	I0131 03:18:52.946661 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring networks are active...
	I0131 03:18:52.947481 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network default is active
	I0131 03:18:52.947856 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network mk-old-k8s-version-711547 is active
	I0131 03:18:52.948334 1465727 main.go:141] libmachine: (old-k8s-version-711547) Getting domain xml...
	I0131 03:18:52.949108 1465727 main.go:141] libmachine: (old-k8s-version-711547) Creating domain...
	I0131 03:18:52.920695 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:18:52.920763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:18:52.922905 1465496 machine.go:91] provisioned docker machine in 4m37.358485704s
	I0131 03:18:52.922986 1465496 fix.go:56] fixHost completed within 4m37.381896689s
	I0131 03:18:52.922997 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 4m37.381936859s
	W0131 03:18:52.923026 1465496 start.go:694] error starting host: provision: host is not running
	W0131 03:18:52.923126 1465496 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0131 03:18:52.923138 1465496 start.go:709] Will try again in 5 seconds ...
	I0131 03:18:54.170545 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting to get IP...
	I0131 03:18:54.171580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.171974 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.172053 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.171968 1467209 retry.go:31] will retry after 195.285731ms: waiting for machine to come up
	I0131 03:18:54.368768 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.369288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.369325 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.369224 1467209 retry.go:31] will retry after 291.163288ms: waiting for machine to come up
	I0131 03:18:54.661822 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.662222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.662266 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.662214 1467209 retry.go:31] will retry after 396.125436ms: waiting for machine to come up
	I0131 03:18:55.059613 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.060062 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.060099 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.060009 1467209 retry.go:31] will retry after 609.786973ms: waiting for machine to come up
	I0131 03:18:55.671954 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.672388 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.672431 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.672334 1467209 retry.go:31] will retry after 716.179011ms: waiting for machine to come up
	I0131 03:18:56.390239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:56.390632 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:56.390667 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:56.390568 1467209 retry.go:31] will retry after 881.998023ms: waiting for machine to come up
	I0131 03:18:57.274841 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:57.275260 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:57.275293 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:57.275202 1467209 retry.go:31] will retry after 1.172177257s: waiting for machine to come up
	I0131 03:18:58.449291 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:58.449814 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:58.449869 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:58.449774 1467209 retry.go:31] will retry after 1.046487536s: waiting for machine to come up
	I0131 03:18:57.925392 1465496 start.go:365] acquiring machines lock for no-preload-625812: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:18:59.498215 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:59.498699 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:59.498739 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:59.498640 1467209 retry.go:31] will retry after 1.563889217s: waiting for machine to come up
	I0131 03:19:01.063580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:01.064137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:01.064179 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:01.064063 1467209 retry.go:31] will retry after 2.225514736s: waiting for machine to come up
	I0131 03:19:03.290747 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:03.291285 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:03.291322 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:03.291205 1467209 retry.go:31] will retry after 2.011947032s: waiting for machine to come up
	I0131 03:19:05.305574 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:05.306072 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:05.306106 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:05.306012 1467209 retry.go:31] will retry after 3.104285698s: waiting for machine to come up
	I0131 03:19:08.411557 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:08.412028 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:08.412054 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:08.411975 1467209 retry.go:31] will retry after 4.201966677s: waiting for machine to come up
	I0131 03:19:12.618299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.618866 1465727 main.go:141] libmachine: (old-k8s-version-711547) Found IP for machine: 192.168.50.63
	I0131 03:19:12.618893 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserving static IP address...
	I0131 03:19:12.618913 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has current primary IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.619364 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserved static IP address: 192.168.50.63
	I0131 03:19:12.619389 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting for SSH to be available...
	I0131 03:19:12.619414 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.619452 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | skip adding static IP to network mk-old-k8s-version-711547 - found existing host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"}
	I0131 03:19:12.619471 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Getting to WaitForSSH function...
	I0131 03:19:12.621473 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621783 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.621805 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621891 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH client type: external
	I0131 03:19:12.621934 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa (-rw-------)
	I0131 03:19:12.621965 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:12.621977 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | About to run SSH command:
	I0131 03:19:12.621987 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | exit 0
	I0131 03:19:12.718254 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:12.718659 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetConfigRaw
	I0131 03:19:12.719369 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:12.722134 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722588 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.722611 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722906 1465727 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/config.json ...
	I0131 03:19:12.723101 1465727 machine.go:88] provisioning docker machine ...
	I0131 03:19:12.723121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:12.723399 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723611 1465727 buildroot.go:166] provisioning hostname "old-k8s-version-711547"
	I0131 03:19:12.723630 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723795 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.726052 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726463 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.726507 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726656 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.726832 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727022 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727122 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.727283 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.727665 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.727680 1465727 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-711547 && echo "old-k8s-version-711547" | sudo tee /etc/hostname
	I0131 03:19:12.870818 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-711547
	
	I0131 03:19:12.870872 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.873799 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874205 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.874242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874355 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.874585 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874774 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874920 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.875079 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.875412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.875428 1465727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-711547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-711547/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-711547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:13.014386 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:13.014419 1465727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:13.014447 1465727 buildroot.go:174] setting up certificates
	I0131 03:19:13.014460 1465727 provision.go:83] configureAuth start
	I0131 03:19:13.014471 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:13.014821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:13.017730 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018105 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.018149 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018286 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.020361 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020680 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.020707 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020896 1465727 provision.go:138] copyHostCerts
	I0131 03:19:13.020961 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:13.020975 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:13.021069 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:13.021199 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:13.021212 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:13.021252 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:13.021393 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:13.021404 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:13.021442 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:13.021512 1465727 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-711547 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube old-k8s-version-711547]
	I0131 03:19:13.265370 1465727 provision.go:172] copyRemoteCerts
	I0131 03:19:13.265438 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:13.265466 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.268546 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269055 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.269090 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269281 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.269518 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.269688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.269849 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.362848 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:13.384287 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0131 03:19:13.405813 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:19:13.427630 1465727 provision.go:86] duration metric: configureAuth took 413.151329ms
	I0131 03:19:13.427671 1465727 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:13.427880 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:19:13.427963 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.430829 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.431299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431515 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.431771 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.431939 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.432092 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.432256 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.432619 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.432638 1465727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:14.011257 1465898 start.go:369] acquired machines lock for "default-k8s-diff-port-873005" in 4m34.419162413s
	I0131 03:19:14.011330 1465898 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:14.011340 1465898 fix.go:54] fixHost starting: 
	I0131 03:19:14.011729 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:14.011767 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:14.028941 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0131 03:19:14.029399 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:14.029937 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:19:14.029968 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:14.030321 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:14.030510 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:14.030692 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:19:14.032290 1465898 fix.go:102] recreateIfNeeded on default-k8s-diff-port-873005: state=Stopped err=<nil>
	I0131 03:19:14.032322 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	W0131 03:19:14.032499 1465898 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:14.034263 1465898 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-873005" ...
	I0131 03:19:14.035857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Start
	I0131 03:19:14.036028 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring networks are active...
	I0131 03:19:14.036734 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network default is active
	I0131 03:19:14.037140 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network mk-default-k8s-diff-port-873005 is active
	I0131 03:19:14.037572 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Getting domain xml...
	I0131 03:19:14.038254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Creating domain...
	I0131 03:19:13.745584 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:13.745630 1465727 machine.go:91] provisioned docker machine in 1.02251207s
	I0131 03:19:13.745646 1465727 start.go:300] post-start starting for "old-k8s-version-711547" (driver="kvm2")
	I0131 03:19:13.745663 1465727 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:13.745688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:13.746069 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:13.746100 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.748837 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749259 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.749309 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749489 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.749691 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.749848 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.749999 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.844423 1465727 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:13.848230 1465727 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:13.848263 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:13.848346 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:13.848431 1465727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:13.848517 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:13.857046 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:13.877753 1465727 start.go:303] post-start completed in 132.085834ms
	I0131 03:19:13.877806 1465727 fix.go:56] fixHost completed within 20.954639604s
	I0131 03:19:13.877836 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.880627 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.880914 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.880948 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.881168 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.881401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881594 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881802 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.882012 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.882412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.882424 1465727 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0131 03:19:14.011062 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671153.963761136
	
	I0131 03:19:14.011098 1465727 fix.go:206] guest clock: 1706671153.963761136
	I0131 03:19:14.011111 1465727 fix.go:219] Guest: 2024-01-31 03:19:13.963761136 +0000 UTC Remote: 2024-01-31 03:19:13.877812082 +0000 UTC m=+285.451358106 (delta=85.949054ms)
	I0131 03:19:14.011141 1465727 fix.go:190] guest clock delta is within tolerance: 85.949054ms
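	The delta reported above is simply the guest's epoch time minus the host-side reference; a minimal sketch of the same arithmetic, using the two timestamps from this log (variable names are illustrative only):
	GUEST=1706671153.963761136   # date +%s.%N as reported by the VM
	HOST=1706671153.877812082    # host-side reference timestamp
	awk -v g="$GUEST" -v h="$HOST" 'BEGIN { printf "delta: %.6fs\n", g - h }'   # ~0.085949s, within tolerance, so no resync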
	I0131 03:19:14.011149 1465727 start.go:83] releasing machines lock for "old-k8s-version-711547", held for 21.088010365s
	I0131 03:19:14.011234 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.011556 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:14.014323 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014754 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.014790 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014966 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015623 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015846 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015953 1465727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:14.016017 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.016087 1465727 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:14.016121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.018767 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019063 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019147 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019185 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019338 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019422 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019450 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019500 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019693 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.019775 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019854 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.019952 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.020096 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.111280 1465727 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:14.148710 1465727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:14.287476 1465727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:14.293232 1465727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:14.293309 1465727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:14.306910 1465727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:14.306939 1465727 start.go:475] detecting cgroup driver to use...
	I0131 03:19:14.307001 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:14.325824 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:14.339835 1465727 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:14.339908 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:14.354064 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:14.367342 1465727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:14.476462 1465727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:14.602643 1465727 docker.go:233] disabling docker service ...
	I0131 03:19:14.602711 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:14.618228 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:14.630450 1465727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:14.758176 1465727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:14.870949 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:14.882268 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:14.898622 1465727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0131 03:19:14.898685 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.907377 1465727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:14.907470 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.915868 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.924046 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
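	The three sed edits above pin the pause image, cgroup manager, and conmon cgroup in the CRI-O drop-in; a minimal sketch of confirming the result on the guest (path taken from the commands above, expected values shown as comments):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"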
	I0131 03:19:14.932324 1465727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:14.941046 1465727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:14.949134 1465727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:14.949196 1465727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:14.965561 1465727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
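	The modprobe/echo pair above covers the two kernel prerequisites for bridge-based pod networking: iptables must see bridged traffic, and IPv4 forwarding must be on. A minimal sketch of verifying both after the commands above (expected states noted as comments):
	lsmod | grep br_netfilter                    # module loaded by the modprobe above
	sysctl net.bridge.bridge-nf-call-iptables    # key now exists; kube-proxy/CNI expect this to be 1
	cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above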
	I0131 03:19:14.973790 1465727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:15.078782 1465727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:15.239650 1465727 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:15.239735 1465727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:15.244418 1465727 start.go:543] Will wait 60s for crictl version
	I0131 03:19:15.244501 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:15.247984 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:15.287716 1465727 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:15.287827 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.339818 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.393318 1465727 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0131 03:19:15.394911 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:15.397888 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:15.398313 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398637 1465727 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:15.402865 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:15.414268 1465727 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 03:19:15.414361 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:15.460589 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:15.460676 1465727 ssh_runner.go:195] Run: which lz4
	I0131 03:19:15.464663 1465727 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0131 03:19:15.468694 1465727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:15.468728 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0131 03:19:17.115892 1465727 crio.go:444] Took 1.651263 seconds to copy over tarball
	I0131 03:19:17.115979 1465727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:15.308732 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting to get IP...
	I0131 03:19:15.309704 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310121 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.310092 1467325 retry.go:31] will retry after 215.51674ms: waiting for machine to come up
	I0131 03:19:15.527614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528155 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528192 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.528108 1467325 retry.go:31] will retry after 346.07944ms: waiting for machine to come up
	I0131 03:19:15.875792 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876340 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876375 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.876290 1467325 retry.go:31] will retry after 476.08407ms: waiting for machine to come up
	I0131 03:19:16.353712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354323 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.354196 1467325 retry.go:31] will retry after 382.739917ms: waiting for machine to come up
	I0131 03:19:16.738958 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739534 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739566 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.739504 1467325 retry.go:31] will retry after 511.138171ms: waiting for machine to come up
	I0131 03:19:17.252373 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252862 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252902 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:17.252798 1467325 retry.go:31] will retry after 879.985444ms: waiting for machine to come up
	I0131 03:19:18.134757 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135287 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135313 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:18.135233 1467325 retry.go:31] will retry after 1.043236668s: waiting for machine to come up
	I0131 03:19:19.179844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180339 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180369 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:19.180288 1467325 retry.go:31] will retry after 1.296129808s: waiting for machine to come up
	I0131 03:19:19.822171 1465727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706149181s)
	I0131 03:19:19.822217 1465727 crio.go:451] Took 2.706292 seconds to extract the tarball
	I0131 03:19:19.822233 1465727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:19.861493 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:19.905950 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:19.905979 1465727 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:19:19.906033 1465727 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.906061 1465727 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.906080 1465727 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.906077 1465727 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.906094 1465727 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:19.906099 1465727 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.906111 1465727 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0131 03:19:19.906179 1465727 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907636 1465727 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.907728 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.907746 1465727 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907750 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.907749 1465727 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.907783 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.907805 1465727 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0131 03:19:19.907807 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.091717 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.132448 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.140199 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0131 03:19:20.146177 1465727 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0131 03:19:20.146263 1465727 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.146324 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.206757 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.216932 1465727 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0131 03:19:20.216985 1465727 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.217082 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219340 1465727 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0131 03:19:20.219367 1465727 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0131 03:19:20.219390 1465727 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.219408 1465727 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.219432 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219449 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.222519 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.241389 1465727 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0131 03:19:20.241449 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.241452 1465727 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0131 03:19:20.241566 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.293129 1465727 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0131 03:19:20.293183 1465727 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.293213 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.293262 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.293284 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.293232 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321447 1465727 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0131 03:19:20.321512 1465727 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.321576 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321605 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0131 03:19:20.321743 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0131 03:19:20.401651 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0131 03:19:20.401720 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.401731 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0131 03:19:20.401793 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0131 03:19:20.401872 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0131 03:19:20.401945 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.439360 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0131 03:19:20.449635 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0131 03:19:20.765201 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:20.911818 1465727 cache_images.go:92] LoadImages completed in 1.005820808s
	W0131 03:19:20.911923 1465727 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0131 03:19:20.912019 1465727 ssh_runner.go:195] Run: crio config
	I0131 03:19:20.978267 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:20.978296 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:20.978318 1465727 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:20.978361 1465727 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-711547 NodeName:old-k8s-version-711547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0131 03:19:20.978540 1465727 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-711547"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-711547
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.63:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:20.978635 1465727 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-711547 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:19:20.978690 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0131 03:19:20.988177 1465727 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:20.988281 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:20.999558 1465727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0131 03:19:21.018567 1465727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:21.036137 1465727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0131 03:19:21.051742 1465727 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:21.056334 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:21.068635 1465727 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547 for IP: 192.168.50.63
	I0131 03:19:21.068670 1465727 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:21.068847 1465727 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:21.068894 1465727 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:21.069089 1465727 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/client.key
	I0131 03:19:21.069185 1465727 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key.1519f60b
	I0131 03:19:21.069262 1465727 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key
	I0131 03:19:21.069418 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:21.069460 1465727 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:21.069476 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:21.069517 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:21.069556 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:21.069595 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:21.069658 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:21.070416 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:21.096160 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:21.119906 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:21.144478 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:21.169174 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:21.191807 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:21.215673 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:21.237705 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:21.262763 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:21.284935 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:21.306372 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:21.327718 1465727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:21.343219 1465727 ssh_runner.go:195] Run: openssl version
	I0131 03:19:21.348904 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:21.358119 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362537 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362619 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.368555 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:21.378236 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:21.387651 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392087 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392155 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.397511 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:21.406631 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:21.416176 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420716 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420816 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.426032 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
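	The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention; a minimal sketch of deriving one, with the minikubeCA path taken from this log:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)    # prints the subject hash, e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # same link the test creates above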
	I0131 03:19:21.434979 1465727 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:21.439153 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:21.444648 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:21.450243 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:21.455489 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:21.460794 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:21.466219 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
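	The "-checkend 86400" runs above are 24-hour expiry probes: openssl exits 0 only if the certificate is still valid 86400 seconds from now, and a non-zero exit is what triggers cert regeneration. A minimal sketch with one of the paths from this log:
	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "etcd server cert valid for at least 24h"    # the branch taken in this run
	else
	  echo "cert expires within 24h; would be regenerated"
	fi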
	I0131 03:19:21.471530 1465727 kubeadm.go:404] StartCluster: {Name:old-k8s-version-711547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:21.471628 1465727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:21.471677 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:21.508722 1465727 cri.go:89] found id: ""
	I0131 03:19:21.508795 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:21.517913 1465727 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:21.517943 1465727 kubeadm.go:636] restartCluster start
	I0131 03:19:21.518012 1465727 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:21.526290 1465727 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:21.527501 1465727 kubeconfig.go:92] found "old-k8s-version-711547" server: "https://192.168.50.63:8443"
	I0131 03:19:21.530259 1465727 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:21.538442 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:21.538528 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:21.548956 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.038468 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.038574 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.049394 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.538605 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.538701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.549651 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:23.038857 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.038988 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.050489 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:20.478788 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479296 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479341 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:20.479262 1467325 retry.go:31] will retry after 1.385706797s: waiting for machine to come up
	I0131 03:19:21.867040 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867480 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867506 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:21.867432 1467325 retry.go:31] will retry after 2.023566474s: waiting for machine to come up
	I0131 03:19:23.893713 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894188 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894222 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:23.894119 1467325 retry.go:31] will retry after 2.335724195s: waiting for machine to come up
	I0131 03:19:23.539335 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.539444 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.550866 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.038592 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.038710 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.050077 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.538579 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.538661 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.549810 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.039420 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.039512 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.051101 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.538549 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.538654 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.552821 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.039279 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.039395 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.050150 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.538699 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.538841 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.553086 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.038585 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.038701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.050685 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.539261 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.539392 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.550316 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:28.039448 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.039564 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.051196 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.231540 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231945 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231970 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:26.231895 1467325 retry.go:31] will retry after 2.956919877s: waiting for machine to come up
	I0131 03:19:29.190010 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190513 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190549 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:29.190433 1467325 retry.go:31] will retry after 3.186526476s: waiting for machine to come up
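The repeated "will retry after …" lines above are libmachine polling libvirt for the guest's DHCP lease with a growing delay until the VM reports an IP. A minimal sketch of that wait-with-backoff pattern follows; probeIP and the timing constants are illustrative assumptions, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// probeIP stands in for the libvirt DHCP-lease lookup the log performs;
// it is a hypothetical placeholder, not a minikube function.
func probeIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

// waitForIP retries probeIP with a jittered, doubling delay until it succeeds
// or the overall timeout expires, similar in spirit to the "will retry after ..."
// lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := probeIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("timed out waiting for machine to get an IP")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}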
	I0131 03:19:28.539230 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.539326 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.551055 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.038675 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.038783 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.049926 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.538507 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.538606 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.549309 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.039257 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.039359 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.050555 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.539147 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.539286 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.550179 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.038685 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.038809 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.050144 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.538939 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.539024 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.549604 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.549647 1465727 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:31.549660 1465727 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:31.549678 1465727 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:31.549770 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:31.587751 1465727 cri.go:89] found id: ""
	I0131 03:19:31.587822 1465727 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:31.603397 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:31.612195 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:31.612263 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620959 1465727 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620984 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:31.737416 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.645078 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.861238 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.944897 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:33.048396 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:33.048496 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
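The block above shows what happens once the apiserver never answers the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" probes: the check gives up with "context deadline exceeded", kube-system containers are stopped, and the control plane is rebuilt with individual "kubeadm init phase" steps before the process poll resumes. A minimal sketch of that polling loop, assuming an illustrative waitForProcess helper and 10s deadline rather than the real api_server.go code:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep on a fixed interval until a matching process
// shows up or the context deadline expires. The pattern argument matches the
// command run in the log; everything else is illustrative.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output(); err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
			// try again on the next tick
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
		fmt.Println(err) // this is the path that leads to "needs reconfigure" above
	}
}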
	I0131 03:19:33.587337 1466459 start.go:369] acquired machines lock for "embed-certs-958254" in 2m30.118621848s
	I0131 03:19:33.587411 1466459 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:33.587444 1466459 fix.go:54] fixHost starting: 
	I0131 03:19:33.587872 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:33.587906 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:33.608024 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0131 03:19:33.608545 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:33.609015 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:19:33.609048 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:33.609468 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:33.609659 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:33.609796 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:19:33.611524 1466459 fix.go:102] recreateIfNeeded on embed-certs-958254: state=Stopped err=<nil>
	I0131 03:19:33.611572 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	W0131 03:19:33.611752 1466459 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:33.613613 1466459 out.go:177] * Restarting existing kvm2 VM for "embed-certs-958254" ...
	I0131 03:19:32.379632 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380099 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380134 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Found IP for machine: 192.168.61.123
	I0131 03:19:32.380150 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserving static IP address...
	I0131 03:19:32.380555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserved static IP address: 192.168.61.123
	I0131 03:19:32.380594 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.380610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for SSH to be available...
	I0131 03:19:32.380647 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | skip adding static IP to network mk-default-k8s-diff-port-873005 - found existing host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"}
	I0131 03:19:32.380661 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Getting to WaitForSSH function...
	I0131 03:19:32.382401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.382787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382872 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH client type: external
	I0131 03:19:32.382903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa (-rw-------)
	I0131 03:19:32.382943 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:32.382959 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | About to run SSH command:
	I0131 03:19:32.382984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | exit 0
	I0131 03:19:32.470672 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | SSH cmd err, output: <nil>: 
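The WaitForSSH step above shells out to /usr/bin/ssh with a fixed set of client options and runs "exit 0" until the guest accepts the connection. A sketch of a single probe is below; the IP and key path are the ones from this log, and the surrounding retry loop is omitted for brevity.

package main

import (
	"fmt"
	"os/exec"
)

// sshProbe runs "exit 0" on the guest with the same client options the log
// shows for the external SSH client; it returns nil once SSH is reachable.
func sshProbe(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no", "-o", "ControlPath=none",
		"-o", "LogLevel=quiet", "-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60", "-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"docker@" + ip,
		"-o", "IdentitiesOnly=yes", "-i", keyPath, "-p", "22",
		"exit 0",
	}
	if out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("ssh not ready: %v\n%s", err, out)
	}
	fmt.Println("SSH is available")
	return nil
}

func main() {
	key := "/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa"
	_ = sshProbe("192.168.61.123", key)
}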
	I0131 03:19:32.471097 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetConfigRaw
	I0131 03:19:32.471768 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.474225 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474597 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.474631 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474948 1465898 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/config.json ...
	I0131 03:19:32.475139 1465898 machine.go:88] provisioning docker machine ...
	I0131 03:19:32.475158 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:32.475374 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475542 1465898 buildroot.go:166] provisioning hostname "default-k8s-diff-port-873005"
	I0131 03:19:32.475564 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475720 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.478005 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478356 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.478391 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478466 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.478693 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.478871 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.479083 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.479287 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.479622 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.479636 1465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-873005 && echo "default-k8s-diff-port-873005" | sudo tee /etc/hostname
	I0131 03:19:32.608136 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-873005
	
	I0131 03:19:32.608173 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.611145 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611544 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.611580 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611716 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.611937 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612154 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612354 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.612511 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.612878 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.612903 1465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-873005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-873005/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-873005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:32.734103 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:32.734144 1465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:32.734176 1465898 buildroot.go:174] setting up certificates
	I0131 03:19:32.734196 1465898 provision.go:83] configureAuth start
	I0131 03:19:32.734209 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.734550 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.737468 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.737810 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.737844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.738096 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.740787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.741233 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741374 1465898 provision.go:138] copyHostCerts
	I0131 03:19:32.741429 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:32.741442 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:32.741498 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:32.741632 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:32.741642 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:32.741665 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:32.741716 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:32.741722 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:32.741738 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:32.741784 1465898 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-873005 san=[192.168.61.123 192.168.61.123 localhost 127.0.0.1 minikube default-k8s-diff-port-873005]
	I0131 03:19:32.850632 1465898 provision.go:172] copyRemoteCerts
	I0131 03:19:32.850695 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:32.850721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.853291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.853651 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.854016 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.854194 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.854361 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:32.943528 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0131 03:19:32.970345 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:32.995909 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:33.024408 1465898 provision.go:86] duration metric: configureAuth took 290.196472ms
	I0131 03:19:33.024438 1465898 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:33.024661 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:33.024755 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.027751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.028312 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028469 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.028719 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.028961 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.029180 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.029424 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.029790 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.029810 1465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:33.350806 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:33.350839 1465898 machine.go:91] provisioned docker machine in 875.685131ms
	I0131 03:19:33.350855 1465898 start.go:300] post-start starting for "default-k8s-diff-port-873005" (driver="kvm2")
	I0131 03:19:33.350871 1465898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:33.350895 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.351287 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:33.351334 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.353986 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354419 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.354443 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354689 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.354898 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.355046 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.355221 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.439603 1465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:33.443119 1465898 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:33.443145 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:33.443222 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:33.443320 1465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:33.443430 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:33.451425 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:33.471270 1465898 start.go:303] post-start completed in 120.397142ms
	I0131 03:19:33.471302 1465898 fix.go:56] fixHost completed within 19.459960903s
	I0131 03:19:33.471326 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.473691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474060 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.474091 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474244 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.474430 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474627 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474753 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.474918 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.475237 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.475249 1465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:33.587174 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671173.532604525
	
	I0131 03:19:33.587202 1465898 fix.go:206] guest clock: 1706671173.532604525
	I0131 03:19:33.587217 1465898 fix.go:219] Guest: 2024-01-31 03:19:33.532604525 +0000 UTC Remote: 2024-01-31 03:19:33.47130747 +0000 UTC m=+294.038044427 (delta=61.297055ms)
	I0131 03:19:33.587243 1465898 fix.go:190] guest clock delta is within tolerance: 61.297055ms
	I0131 03:19:33.587251 1465898 start.go:83] releasing machines lock for "default-k8s-diff-port-873005", held for 19.57594393s
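The fix.go lines above compare the guest clock (read over SSH with the date command just before) against the host's notion of the remote time and accept the machine when the delta is small. Reproducing the arithmetic with the two timestamps from this log; the one-second tolerance is an assumption for the sketch, not necessarily minikube's threshold.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log above.
	guest := time.Date(2024, 1, 31, 3, 19, 33, 532604525, time.UTC)
	host := time.Date(2024, 1, 31, 3, 19, 33, 471307470, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}

	tolerance := time.Second // assumed threshold for the sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints 61.297055ms
	} else {
		fmt.Printf("guest clock skew too large: %v\n", delta)
	}
}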
	I0131 03:19:33.587282 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.587557 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:33.590395 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590776 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.590809 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590995 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591623 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591822 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591926 1465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:33.591999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.592054 1465898 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:33.592078 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.594999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595446 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.595477 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595644 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.595805 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595879 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596082 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596258 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.596286 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.596380 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.596390 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.596579 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596760 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596951 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.715222 1465898 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:33.721794 1465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:33.871506 1465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:33.877488 1465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:33.877596 1465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:33.896121 1465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:33.896156 1465898 start.go:475] detecting cgroup driver to use...
	I0131 03:19:33.896245 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:33.912876 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:33.927661 1465898 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:33.927743 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:33.944332 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:33.960438 1465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:34.086879 1465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:34.218866 1465898 docker.go:233] disabling docker service ...
	I0131 03:19:34.218946 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:34.233585 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:34.246358 1465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:34.387480 1465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:34.513082 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:34.526532 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:34.544801 1465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:34.544902 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.558806 1465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:34.558905 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.569251 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.582784 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.595979 1465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:34.608318 1465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:34.616417 1465898 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:34.616494 1465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:34.629018 1465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:34.638513 1465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:34.753541 1465898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:34.963779 1465898 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:34.963868 1465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:34.969755 1465898 start.go:543] Will wait 60s for crictl version
	I0131 03:19:34.969826 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:19:34.974176 1465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:35.020759 1465898 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:35.020850 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.072999 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.143712 1465898 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
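Before "Preparing Kubernetes v1.28.4 on CRI-O 1.24.1", the log above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and switch the cgroup manager to cgroupfs, then restarts crio. The same four edits, collected into a small Go driver against a scratch copy of the file; the /tmp path is hypothetical, and on the real guest these run via sudo over SSH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Scratch copy for illustration; the log edits /etc/crio/crio.conf.d/02-crio.conf in place.
	conf := "/tmp/02-crio.conf"
	steps := [][]string{
		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, conf},
		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"sed", "-i", `/conmon_cgroup = .*/d`, conf},
		{"sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", s, err, out)
			return
		}
	}
	fmt.Println("cri-o configured: pause:3.9, cgroupfs, conmon_cgroup = pod")
}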
	I0131 03:19:33.615078 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Start
	I0131 03:19:33.615258 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring networks are active...
	I0131 03:19:33.616056 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network default is active
	I0131 03:19:33.616376 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network mk-embed-certs-958254 is active
	I0131 03:19:33.616770 1466459 main.go:141] libmachine: (embed-certs-958254) Getting domain xml...
	I0131 03:19:33.617424 1466459 main.go:141] libmachine: (embed-certs-958254) Creating domain...
	I0131 03:19:35.016562 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting to get IP...
	I0131 03:19:35.017711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.018134 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.018234 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.018115 1467469 retry.go:31] will retry after 281.115622ms: waiting for machine to come up
	I0131 03:19:35.300987 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.301642 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.301672 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.301583 1467469 retry.go:31] will retry after 382.696531ms: waiting for machine to come up
	I0131 03:19:35.686371 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.686945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.686983 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.686881 1467469 retry.go:31] will retry after 467.397008ms: waiting for machine to come up
	I0131 03:19:36.156392 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.157129 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.157161 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.157087 1467469 retry.go:31] will retry after 588.034996ms: waiting for machine to come up
	I0131 03:19:36.747103 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.747739 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.747771 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.747711 1467469 retry.go:31] will retry after 570.532804ms: waiting for machine to come up
	I0131 03:19:37.319694 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.320231 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.320264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.320206 1467469 retry.go:31] will retry after 572.77687ms: waiting for machine to come up
	I0131 03:19:37.895308 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.895814 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.895844 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.895769 1467469 retry.go:31] will retry after 833.23491ms: waiting for machine to come up
	I0131 03:19:33.549149 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.048799 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.549314 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.048885 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.075463 1465727 api_server.go:72] duration metric: took 2.027068042s to wait for apiserver process to appear ...
	I0131 03:19:35.075490 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:35.075525 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
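From this point the restart targeting 192.168.50.63 switches from process probes to polling the apiserver's /healthz endpoint; the 403 and 500 responses further down show the poststarthooks completing one by one until the endpoint reports healthy. A minimal sketch of such a poller follows; the insecure TLS config is a shortcut for illustration only, since minikube verifies against the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits /healthz until it answers 200 OK or the timeout expires,
// printing the body (the [+]/[-] check lines) on every non-OK response.
func pollHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.50.63:8443/healthz", 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}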
	I0131 03:19:35.145198 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:35.148610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149052 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:35.149087 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149329 1465898 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:35.153543 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:35.169144 1465898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:35.169226 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:35.217572 1465898 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:35.217675 1465898 ssh_runner.go:195] Run: which lz4
	I0131 03:19:35.221897 1465898 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:35.226333 1465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:35.226373 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:36.870773 1465898 crio.go:444] Took 1.648904 seconds to copy over tarball
	I0131 03:19:36.870903 1465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
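Because "crictl images" found none of the expected images, the log above falls back to scp'ing the ~458 MB preloaded-images tarball into the guest and unpacking it under /var. A sketch of that extraction step, wrapped in a timing helper similar to the duration metrics in the log; the paths are the ones the log uses, and running this anywhere but a minikube guest is only illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks the lz4 preload tarball of container images under
// dest, mirroring the tar invocation in the log above.
func extractPreload(tarball, dest string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extracting %s failed: %v\n%s", tarball, err, out)
	}
	fmt.Printf("extracted %s in %s\n", tarball, time.Since(start))
	return nil
}

func main() {
	_ = extractPreload("/preloaded.tar.lz4", "/var")
}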
	I0131 03:19:38.730812 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:38.731317 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:38.731367 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:38.731283 1467469 retry.go:31] will retry after 1.083923411s: waiting for machine to come up
	I0131 03:19:39.816550 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:39.817000 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:39.817035 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:39.816957 1467469 retry.go:31] will retry after 1.414569505s: waiting for machine to come up
	I0131 03:19:41.232711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:41.233072 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:41.233104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:41.233020 1467469 retry.go:31] will retry after 1.829994317s: waiting for machine to come up
	I0131 03:19:43.065343 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:43.065823 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:43.065857 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:43.065760 1467469 retry.go:31] will retry after 2.506323142s: waiting for machine to come up
	I0131 03:19:40.076389 1465727 api_server.go:269] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0131 03:19:40.076448 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.717017 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.717059 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:41.717079 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.738258 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.738291 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:42.075699 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.730135 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.730181 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:42.730203 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.805335 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.805375 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.076421 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.082935 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:43.082971 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.575664 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.582814 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:19:43.593073 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:19:43.593113 1465727 api_server.go:131] duration metric: took 8.517613988s to wait for apiserver health ...
	I0131 03:19:43.593127 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:43.593144 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:43.594982 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:19:39.815034 1465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944091458s)
	I0131 03:19:39.815074 1465898 crio.go:451] Took 2.944224 seconds to extract the tarball
	I0131 03:19:39.815090 1465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:39.855696 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:39.904386 1465898 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:19:39.904418 1465898 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:19:39.904509 1465898 ssh_runner.go:195] Run: crio config
	I0131 03:19:39.972894 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:19:39.972928 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:39.972957 1465898 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:39.972985 1465898 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-873005 NodeName:default-k8s-diff-port-873005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:19:39.973201 1465898 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-873005"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:39.973298 1465898 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-873005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0131 03:19:39.973365 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:19:39.982097 1465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:39.982206 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:39.993781 1465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0131 03:19:40.012618 1465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:40.031973 1465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0131 03:19:40.049646 1465898 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:40.053498 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:40.066873 1465898 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005 for IP: 192.168.61.123
	I0131 03:19:40.066914 1465898 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:40.067198 1465898 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:40.067254 1465898 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:40.067376 1465898 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/client.key
	I0131 03:19:40.067474 1465898 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key.596e38b1
	I0131 03:19:40.067535 1465898 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key
	I0131 03:19:40.067748 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:40.067797 1465898 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:40.067813 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:40.067850 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:40.067885 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:40.067924 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:40.067978 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:40.068687 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:40.094577 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:40.117833 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:40.140782 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:40.163701 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:40.187177 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:40.218570 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:40.246136 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:40.275403 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:40.302040 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:40.327371 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:40.352927 1465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:40.371690 1465898 ssh_runner.go:195] Run: openssl version
	I0131 03:19:40.377700 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:40.387507 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393609 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393701 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.401095 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:40.415647 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:40.426902 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431720 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431803 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.437347 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:40.446986 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:40.457779 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462716 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462790 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.468321 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:40.481055 1465898 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:40.486096 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:40.492538 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:40.498664 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:40.504630 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:40.510588 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:40.516480 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:19:40.524391 1465898 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-873005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:40.524509 1465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:40.524570 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:40.575788 1465898 cri.go:89] found id: ""
	I0131 03:19:40.575887 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:40.585291 1465898 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:40.585320 1465898 kubeadm.go:636] restartCluster start
	I0131 03:19:40.585383 1465898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:40.594593 1465898 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:40.596215 1465898 kubeconfig.go:92] found "default-k8s-diff-port-873005" server: "https://192.168.61.123:8444"
	I0131 03:19:40.600123 1465898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:40.609224 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:40.609289 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:40.620769 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.110331 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.110450 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.121982 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.609492 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.609592 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.621972 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.109411 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.109515 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.124820 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.609296 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.609412 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.621029 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.109511 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.109606 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.124911 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.609397 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.609514 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.626240 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:44.109323 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.109419 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.124549 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.573357 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:45.573785 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:45.573821 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:45.573735 1467469 retry.go:31] will retry after 3.608126135s: waiting for machine to come up
	I0131 03:19:43.596636 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:19:43.613239 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:19:43.655123 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:19:43.665773 1465727 system_pods.go:59] 7 kube-system pods found
	I0131 03:19:43.665819 1465727 system_pods.go:61] "coredns-5644d7b6d9-2g2fj" [fc3c718c-696b-4a57-83e2-d9ee3bed6923] Running
	I0131 03:19:43.665844 1465727 system_pods.go:61] "etcd-old-k8s-version-711547" [4c5a2527-ffa7-4771-8380-56556030ad90] Running
	I0131 03:19:43.665852 1465727 system_pods.go:61] "kube-apiserver-old-k8s-version-711547" [df7cbcbe-1aeb-4986-82e5-70d495b2579d] Running
	I0131 03:19:43.665859 1465727 system_pods.go:61] "kube-controller-manager-old-k8s-version-711547" [21cccd1c-4b8e-4d4f-956d-872aa474e9d8] Running
	I0131 03:19:43.665868 1465727 system_pods.go:61] "kube-proxy-7dtkz" [aac05831-252e-486d-8bc8-772731374a89] Running
	I0131 03:19:43.665875 1465727 system_pods.go:61] "kube-scheduler-old-k8s-version-711547" [da2f43ad-bbc3-44fb-a608-08c2ae08818f] Running
	I0131 03:19:43.665885 1465727 system_pods.go:61] "storage-provisioner" [f16355c3-b573-40f2-ad98-32c077f04e46] Running
	I0131 03:19:43.665894 1465727 system_pods.go:74] duration metric: took 10.742015ms to wait for pod list to return data ...
	I0131 03:19:43.665915 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:19:43.670287 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:19:43.670328 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:19:43.670343 1465727 node_conditions.go:105] duration metric: took 4.422551ms to run NodePressure ...
	I0131 03:19:43.670366 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:43.947579 1465727 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:19:43.952499 1465727 retry.go:31] will retry after 170.414704ms: kubelet not initialised
	I0131 03:19:44.130420 1465727 retry.go:31] will retry after 504.822426ms: kubelet not initialised
	I0131 03:19:44.640095 1465727 retry.go:31] will retry after 519.270243ms: kubelet not initialised
	I0131 03:19:45.164417 1465727 retry.go:31] will retry after 730.256814ms: kubelet not initialised
	I0131 03:19:45.903026 1465727 retry.go:31] will retry after 853.098887ms: kubelet not initialised
	I0131 03:19:46.764300 1465727 retry.go:31] will retry after 2.467014704s: kubelet not initialised
	I0131 03:19:44.609572 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.609682 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.625242 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.109761 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.109894 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.121467 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.610114 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.610210 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.621421 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.109926 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.109996 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.121003 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.609509 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.609649 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.620779 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.110208 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.110316 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.122909 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.609355 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.609474 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.620375 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.109993 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.110131 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.123531 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.610170 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.610266 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.620964 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.109874 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.109997 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.121344 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.183666 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:49.184174 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:49.184209 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:49.184103 1467469 retry.go:31] will retry after 3.277150176s: waiting for machine to come up
	I0131 03:19:52.465465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.465830 1466459 main.go:141] libmachine: (embed-certs-958254) Found IP for machine: 192.168.39.232
	I0131 03:19:52.465849 1466459 main.go:141] libmachine: (embed-certs-958254) Reserving static IP address...
	I0131 03:19:52.465863 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has current primary IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.466264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.466307 1466459 main.go:141] libmachine: (embed-certs-958254) Reserved static IP address: 192.168.39.232
	I0131 03:19:52.466331 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting for SSH to be available...
	I0131 03:19:52.466352 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | skip adding static IP to network mk-embed-certs-958254 - found existing host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"}
	I0131 03:19:52.466368 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Getting to WaitForSSH function...
	I0131 03:19:52.468562 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.468867 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.468900 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.469041 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH client type: external
	I0131 03:19:52.469074 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa (-rw-------)
	I0131 03:19:52.469117 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:52.469137 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | About to run SSH command:
	I0131 03:19:52.469151 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | exit 0
	I0131 03:19:52.554397 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:52.554838 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetConfigRaw
	I0131 03:19:52.555611 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.558511 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.558906 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.558945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.559188 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:19:52.559400 1466459 machine.go:88] provisioning docker machine ...
	I0131 03:19:52.559421 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:52.559645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559816 1466459 buildroot.go:166] provisioning hostname "embed-certs-958254"
	I0131 03:19:52.559831 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559994 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.562543 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.562901 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.562933 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.563085 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.563276 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563436 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563628 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.563800 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.564147 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.564161 1466459 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-958254 && echo "embed-certs-958254" | sudo tee /etc/hostname
	I0131 03:19:52.688777 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-958254
	
	I0131 03:19:52.688817 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.692015 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.692497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692797 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.693013 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693184 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693388 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.693579 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.694043 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.694071 1466459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-958254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-958254/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-958254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:52.821443 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:52.821489 1466459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:52.821543 1466459 buildroot.go:174] setting up certificates
	I0131 03:19:52.821567 1466459 provision.go:83] configureAuth start
	I0131 03:19:52.821583 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.821930 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.825108 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825496 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.825527 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825756 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.828269 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828621 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.828651 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828893 1466459 provision.go:138] copyHostCerts
	I0131 03:19:52.828964 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:52.828987 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:52.829069 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:52.829194 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:52.829209 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:52.829243 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:52.829323 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:52.829335 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:52.829366 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:52.829466 1466459 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.embed-certs-958254 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube embed-certs-958254]
	I0131 03:19:52.931760 1466459 provision.go:172] copyRemoteCerts
	I0131 03:19:52.931825 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:52.931856 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.935111 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935440 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.935465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935721 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.935915 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.936117 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.936273 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.024352 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:53.051185 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:19:53.076996 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:53.097919 1466459 provision.go:86] duration metric: configureAuth took 276.335726ms
	I0131 03:19:53.097951 1466459 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:53.098189 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:53.098319 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.101687 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102128 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.102178 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102334 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.102610 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.102877 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.103072 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.103309 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.103829 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.103860 1466459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:49.236547 1465727 retry.go:31] will retry after 1.793227218s: kubelet not initialised
	I0131 03:19:51.035248 1465727 retry.go:31] will retry after 2.779615352s: kubelet not initialised
	I0131 03:19:53.664145 1465496 start.go:369] acquired machines lock for "no-preload-625812" in 55.738696582s
	I0131 03:19:53.664205 1465496 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:53.664216 1465496 fix.go:54] fixHost starting: 
	I0131 03:19:53.664629 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:53.664680 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:53.683147 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0131 03:19:53.684034 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:53.684629 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:19:53.684660 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:53.685055 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:53.685266 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:19:53.685468 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:19:53.687260 1465496 fix.go:102] recreateIfNeeded on no-preload-625812: state=Stopped err=<nil>
	I0131 03:19:53.687288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	W0131 03:19:53.687444 1465496 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:53.689464 1465496 out.go:177] * Restarting existing kvm2 VM for "no-preload-625812" ...
	I0131 03:19:49.610240 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.610357 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.621551 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.110145 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.110248 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.121902 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.609752 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.609896 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.620729 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.620760 1465898 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:50.620769 1465898 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:50.620781 1465898 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:50.620842 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:50.655962 1465898 cri.go:89] found id: ""
	I0131 03:19:50.656034 1465898 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:50.670196 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:50.678438 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:50.678512 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686353 1465898 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686377 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:50.787983 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.766656 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.947670 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.020841 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.087869 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:52.087974 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:52.588285 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.088598 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.588683 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.088222 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.416070 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:53.416102 1466459 machine.go:91] provisioned docker machine in 856.686657ms
	I0131 03:19:53.416115 1466459 start.go:300] post-start starting for "embed-certs-958254" (driver="kvm2")
	I0131 03:19:53.416130 1466459 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:53.416152 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.416515 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:53.416550 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.419110 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.419525 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419836 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.420057 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.420223 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.420376 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.503785 1466459 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:53.507858 1466459 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:53.507890 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:53.508021 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:53.508094 1466459 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:53.508184 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:53.515845 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:53.537459 1466459 start.go:303] post-start completed in 121.324433ms
	I0131 03:19:53.537495 1466459 fix.go:56] fixHost completed within 19.950074846s
	I0131 03:19:53.537526 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.540719 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541097 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.541138 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541371 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.541590 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541707 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541924 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.542116 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.542438 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.542452 1466459 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0131 03:19:53.663950 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671193.614107467
	
	I0131 03:19:53.663981 1466459 fix.go:206] guest clock: 1706671193.614107467
	I0131 03:19:53.663991 1466459 fix.go:219] Guest: 2024-01-31 03:19:53.614107467 +0000 UTC Remote: 2024-01-31 03:19:53.537501013 +0000 UTC m=+170.232508862 (delta=76.606454ms)
	I0131 03:19:53.664051 1466459 fix.go:190] guest clock delta is within tolerance: 76.606454ms
	I0131 03:19:53.664061 1466459 start.go:83] releasing machines lock for "embed-certs-958254", held for 20.076673524s
	I0131 03:19:53.664095 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.664469 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:53.667439 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668024 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.668104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668219 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.668884 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669087 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669227 1466459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:53.669314 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.669346 1466459 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:53.669377 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.673093 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673248 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673420 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673194 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673517 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673557 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673580 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673667 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673734 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.673969 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.673982 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.674173 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.674180 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.674312 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.799336 1466459 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:53.805162 1466459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:53.952587 1466459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:53.958419 1466459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:53.958530 1466459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:53.971832 1466459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:53.971866 1466459 start.go:475] detecting cgroup driver to use...
	I0131 03:19:53.971946 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:53.988375 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:54.000875 1466459 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:54.000948 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:54.017770 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:54.034214 1466459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:54.154352 1466459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:54.314926 1466459 docker.go:233] disabling docker service ...
	I0131 03:19:54.315012 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:54.330557 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:54.344595 1466459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:54.468196 1466459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:54.630438 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:54.645472 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:54.665340 1466459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:54.665427 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.677758 1466459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:54.677843 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.690405 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.702616 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.712654 1466459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:54.723746 1466459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:54.735284 1466459 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:54.735358 1466459 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:54.751082 1466459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:54.762460 1466459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:54.916842 1466459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:55.105770 1466459 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:55.105862 1466459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:55.111870 1466459 start.go:543] Will wait 60s for crictl version
	I0131 03:19:55.112014 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:19:55.116743 1466459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:55.165427 1466459 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:55.165526 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.223389 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.272307 1466459 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:19:53.690828 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Start
	I0131 03:19:53.691030 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring networks are active...
	I0131 03:19:53.691801 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network default is active
	I0131 03:19:53.692297 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network mk-no-preload-625812 is active
	I0131 03:19:53.693485 1465496 main.go:141] libmachine: (no-preload-625812) Getting domain xml...
	I0131 03:19:53.694618 1465496 main.go:141] libmachine: (no-preload-625812) Creating domain...
	I0131 03:19:55.042532 1465496 main.go:141] libmachine: (no-preload-625812) Waiting to get IP...
	I0131 03:19:55.043607 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.044041 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.044180 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.044045 1467687 retry.go:31] will retry after 230.922351ms: waiting for machine to come up
	I0131 03:19:55.276816 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.277402 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.277435 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.277367 1467687 retry.go:31] will retry after 370.068692ms: waiting for machine to come up
	I0131 03:19:55.274017 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:55.277592 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278017 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:55.278056 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278356 1466459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:55.283769 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:55.298107 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:55.298188 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:55.338433 1466459 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:55.338558 1466459 ssh_runner.go:195] Run: which lz4
	I0131 03:19:55.342771 1466459 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0131 03:19:55.347160 1466459 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:55.347206 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:56.991725 1466459 crio.go:444] Took 1.648994 seconds to copy over tarball
	I0131 03:19:56.991821 1466459 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:53.823139 1465727 retry.go:31] will retry after 3.780431021s: kubelet not initialised
	I0131 03:19:57.615679 1465727 retry.go:31] will retry after 12.134340719s: kubelet not initialised
	I0131 03:19:54.588794 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.623052 1465898 api_server.go:72] duration metric: took 2.535180605s to wait for apiserver process to appear ...
	I0131 03:19:54.623092 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:54.623141 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:55.649133 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.649797 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.649838 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.649768 1467687 retry.go:31] will retry after 421.622241ms: waiting for machine to come up
	I0131 03:19:56.073712 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.074467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.074513 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.074269 1467687 retry.go:31] will retry after 587.05453ms: waiting for machine to come up
	I0131 03:19:56.663210 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.663749 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.663790 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.663678 1467687 retry.go:31] will retry after 620.56275ms: waiting for machine to come up
	I0131 03:19:57.286207 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.286688 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.286737 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.286647 1467687 retry.go:31] will retry after 674.764903ms: waiting for machine to come up
	I0131 03:19:57.963069 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.963573 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.963599 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.963520 1467687 retry.go:31] will retry after 1.10400582s: waiting for machine to come up
	I0131 03:19:59.068964 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:59.069440 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:59.069467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:59.069383 1467687 retry.go:31] will retry after 1.48867494s: waiting for machine to come up
	I0131 03:20:00.084963 1466459 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093104085s)
	I0131 03:20:00.085000 1466459 crio.go:451] Took 3.093238 seconds to extract the tarball
	I0131 03:20:00.085014 1466459 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:20:00.153533 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:00.203133 1466459 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:20:00.203215 1466459 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:20:00.203308 1466459 ssh_runner.go:195] Run: crio config
	I0131 03:20:00.266864 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:00.266898 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:00.266927 1466459 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:00.266955 1466459 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-958254 NodeName:embed-certs-958254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:00.267148 1466459 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-958254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:00.267253 1466459 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-958254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:00.267331 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:20:00.279543 1466459 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:00.279637 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:00.292463 1466459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0131 03:20:00.313102 1466459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:20:00.329962 1466459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0131 03:20:00.351487 1466459 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:00.355881 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:00.368624 1466459 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254 for IP: 192.168.39.232
	I0131 03:20:00.368668 1466459 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:00.368836 1466459 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:00.368890 1466459 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:00.368997 1466459 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/client.key
	I0131 03:20:00.369071 1466459 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key.ca7bc7e0
	I0131 03:20:00.369108 1466459 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key
	I0131 03:20:00.369230 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:00.369257 1466459 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:00.369268 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:00.369294 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:00.369317 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:00.369341 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:00.369379 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:00.370093 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:00.392771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 03:20:00.416504 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:00.441357 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 03:20:00.469603 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:00.493533 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:00.521871 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:00.547738 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:00.572771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:00.596263 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:00.618766 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:00.642074 1466459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:00.657634 1466459 ssh_runner.go:195] Run: openssl version
	I0131 03:20:00.662869 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:00.673704 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678201 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678299 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.683872 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:00.694619 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:00.705736 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710374 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710451 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.715928 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:00.727620 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:00.738237 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742428 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742525 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.747812 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:00.757953 1466459 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:00.762418 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:00.768325 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:00.773824 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:00.779967 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:00.785943 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:00.791907 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
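Each of the "openssl x509 -noout -in <cert> -checkend 86400" runs above asks one question: will this certificate still be valid 24 hours from now? A minimal Go equivalent of that check (a sketch; the path passed in main is just one of the certificate files named in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the certificate at path is still valid d from now,
// which is what "openssl x509 -checkend <seconds>" tests.
func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}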
	I0131 03:20:00.797790 1466459 kubeadm.go:404] StartCluster: {Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:00.797882 1466459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:00.797989 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:00.843199 1466459 cri.go:89] found id: ""
	I0131 03:20:00.843289 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:00.853963 1466459 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:00.853994 1466459 kubeadm.go:636] restartCluster start
	I0131 03:20:00.854060 1466459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:00.864776 1466459 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:00.866019 1466459 kubeconfig.go:92] found "embed-certs-958254" server: "https://192.168.39.232:8443"
	I0131 03:20:00.868584 1466459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:00.878689 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:00.878765 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:00.891577 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.378755 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.378849 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.392040 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.879661 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.879770 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.894998 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.379551 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.379671 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.393008 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.879560 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.879680 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.896699 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:59.557240 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.557285 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.557308 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.612724 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.612775 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.624061 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.721181 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:19:59.721236 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.123708 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.134187 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.134229 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.624066 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.630341 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.630374 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
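What this stretch of the log shows is a readiness poll against the apiserver's /healthz endpoint: anonymous requests first get 403 (RBAC for system:anonymous is not bootstrapped yet), then 500 while individual poststarthooks are still failing, and the loop keeps retrying until a plain 200 comes back. A simplified Go sketch of such a poll (illustrative only, not the actual minikube code; the URL is the one from the log, and TLS verification is skipped because the probe may run before the client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls the healthz URL until it returns 200 OK or the timeout
// expires. 403 and 500 responses are treated as "not ready yet".
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403: anonymous access not yet allowed; 500: poststarthooks still failing.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.61.123:8444/healthz", 4*time.Minute))
}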
	I0131 03:20:01.123728 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.131385 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.131479 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.623667 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.629384 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.629431 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.123701 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.129220 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.129272 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.623693 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.629228 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.629271 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.123778 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.132555 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:03.132617 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.623244 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.630694 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:20:03.649732 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:03.649778 1465898 api_server.go:131] duration metric: took 9.02667615s to wait for apiserver health ...
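
For reference, the wait shown above is a simple poll loop: repeated GETs against https://192.168.61.123:8444/healthz roughly every 0.5s, printing the [+]/[-] poststarthook list while the endpoint returns 500, until it finally answers 200 after ~9s. A minimal Go sketch of such a loop follows; the HTTP client setup, poll interval, and timeout here are illustrative assumptions, not minikube's actual api_server.go code.

// Hypothetical sketch (not minikube's api_server.go): poll an apiserver
// /healthz endpoint until it returns 200 or a deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster uses a self-signed CA, so certificate
		// verification is skipped in this illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 500 responses carry the [+]/[-] check list seen in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s poll cadence above
	}
	return fmt.Errorf("apiserver healthz did not become ready within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.123:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
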
	I0131 03:20:03.649792 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:20:03.649802 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:03.651944 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:03.653645 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:03.683325 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
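
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced by the "Configuring bridge CNI" step. A representative conflist and a tiny Go sketch that writes it are shown below purely as an illustration; the exact fields and values minikube generates may differ from this assumed content.

// Hypothetical sketch: write a representative bridge CNI conflist.
package main

import (
	"fmt"
	"os"
)

// Assumed, representative content; minikube's actual 457-byte template
// may differ in detail.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Writing to the real /etc/cni/net.d path requires root on the guest;
	// a local file name is used here purely for illustration.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
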
	I0131 03:20:03.719778 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:03.745975 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:03.746029 1465898 system_pods.go:61] "coredns-5dd5756b68-xlq7n" [0b9d620d-d79f-474e-aeb7-1357daaaa849] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:03.746044 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [2f2f474f-bee9-4df2-a5f6-2525bc15c99a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:03.746056 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [ba87e90b-b01b-4aa7-a4da-68d8e5c39020] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:03.746088 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [a96ebed4-d6f6-47b7-a8f6-b80acc9cde60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:03.746111 1465898 system_pods.go:61] "kube-proxy-trv94" [c085dfdb-0b75-40c1-b331-ef687888090e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:03.746121 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [b7adce77-8007-4316-9a2a-bdcec260840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:03.746141 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-fct8b" [b1d9d7e3-98c4-4b7a-acd1-d88fe109ef33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:03.746155 1465898 system_pods.go:61] "storage-provisioner" [be762288-ff88-44e7-938d-9ecc8a977526] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:03.746169 1465898 system_pods.go:74] duration metric: took 26.36215ms to wait for pod list to return data ...
	I0131 03:20:03.746183 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:03.755320 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:03.755365 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:03.755384 1465898 node_conditions.go:105] duration metric: took 9.194114ms to run NodePressure ...
	I0131 03:20:03.755413 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:04.124222 1465898 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130888 1465898 kubeadm.go:787] kubelet initialised
	I0131 03:20:04.130921 1465898 kubeadm.go:788] duration metric: took 6.663771ms waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130932 1465898 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:04.141883 1465898 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
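
The pod_ready.go waits above repeatedly check each system-critical pod's Ready condition for up to 4m0s. A rough client-go sketch of that kind of check is below; the kubeconfig path, the hard-coded pod name, and the 2s poll interval are illustrative assumptions rather than minikube's implementation.

// Hypothetical sketch using client-go (not minikube's pod_ready.go):
// poll a kube-system pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a reachable kubeconfig; the path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-xlq7n", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
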
	I0131 03:20:00.559917 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:00.715628 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:00.715677 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:00.560506 1467687 retry.go:31] will retry after 1.67725835s: waiting for machine to come up
	I0131 03:20:02.240289 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:02.240826 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:02.240863 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:02.240781 1467687 retry.go:31] will retry after 2.023057937s: waiting for machine to come up
	I0131 03:20:04.266202 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:04.266733 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:04.266825 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:04.266715 1467687 retry.go:31] will retry after 2.664323304s: waiting for machine to come up
	I0131 03:20:03.379260 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.379366 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.395063 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:03.879206 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.879327 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.896172 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.378721 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.378829 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.395328 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.878823 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.878944 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.891061 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.379692 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.379795 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.395247 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.879667 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.879811 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.894445 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.378974 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.379107 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.391878 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.879343 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.879446 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.892910 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.379549 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.379647 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.391991 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.879610 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.879757 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.895280 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
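
The repeated "Checking apiserver status" entries above run `sudo pgrep -xnf kube-apiserver.*minikube.*` and treat exit status 1 (no match) as "the process is not up yet", retrying about twice per second. A minimal Go sketch of that check, run locally via os/exec rather than over SSH, might look like the following; the retry count and interval are assumptions.

// Hypothetical sketch (not minikube's ssh_runner): detect a running
// kube-apiserver process by invoking pgrep and inspecting its exit status.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() (bool, error) {
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	err := cmd.Run()
	if err == nil {
		return true, nil // pgrep found a matching process
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return false, nil // exit status 1: no process matched yet
	}
	return false, err // any other failure is a real error
}

func main() {
	for i := 0; i < 10; i++ {
		ok, err := apiserverRunning()
		if err != nil {
			fmt.Println("pgrep failed:", err)
			return
		}
		if ok {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log retries roughly every 0.5s
	}
	fmt.Println("kube-apiserver did not appear")
}
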
	I0131 03:20:06.154196 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:08.664906 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:06.932836 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:06.933529 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:06.933574 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:06.933459 1467687 retry.go:31] will retry after 3.065677387s: waiting for machine to come up
	I0131 03:20:10.001330 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:10.002186 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:10.002216 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:10.002101 1467687 retry.go:31] will retry after 3.036905728s: waiting for machine to come up
	I0131 03:20:08.379200 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.379310 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.392983 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:08.878955 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.879070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.890999 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.379530 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.379633 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.391351 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.878733 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.878814 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.891556 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.379098 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.379206 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.391233 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.879672 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.879786 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.892324 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.892364 1466459 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:10.892377 1466459 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:10.892393 1466459 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:10.892471 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:10.932354 1466459 cri.go:89] found id: ""
	I0131 03:20:10.932425 1466459 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:10.948273 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:10.957212 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:10.957285 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966329 1466459 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966369 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.093326 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.750399 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.960956 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.060752 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.148963 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:12.149070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:12.649736 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:13.150030 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:09.755152 1465727 retry.go:31] will retry after 13.770889272s: kubelet not initialised
	I0131 03:20:09.648674 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:09.648703 1465898 pod_ready.go:81] duration metric: took 5.506781604s waiting for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:09.648716 1465898 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656233 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:11.656258 1465898 pod_ready.go:81] duration metric: took 2.007535905s waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656267 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663570 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.663600 1465898 pod_ready.go:81] duration metric: took 1.007324961s waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668808 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.668832 1465898 pod_ready.go:81] duration metric: took 5.21407ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668843 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673583 1465898 pod_ready.go:92] pod "kube-proxy-trv94" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.673603 1465898 pod_ready.go:81] duration metric: took 4.754586ms waiting for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679052 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.679074 1465898 pod_ready.go:81] duration metric: took 5.453847ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679082 1465898 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:13.040911 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.041419 1465496 main.go:141] libmachine: (no-preload-625812) Found IP for machine: 192.168.72.23
	I0131 03:20:13.041451 1465496 main.go:141] libmachine: (no-preload-625812) Reserving static IP address...
	I0131 03:20:13.041471 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has current primary IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.042029 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.042083 1465496 main.go:141] libmachine: (no-preload-625812) Reserved static IP address: 192.168.72.23
	I0131 03:20:13.042105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | skip adding static IP to network mk-no-preload-625812 - found existing host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"}
	I0131 03:20:13.042124 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Getting to WaitForSSH function...
	I0131 03:20:13.042143 1465496 main.go:141] libmachine: (no-preload-625812) Waiting for SSH to be available...
	I0131 03:20:13.044263 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044670 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.044707 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044866 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH client type: external
	I0131 03:20:13.044890 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa (-rw-------)
	I0131 03:20:13.044958 1465496 main.go:141] libmachine: (no-preload-625812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:20:13.044979 1465496 main.go:141] libmachine: (no-preload-625812) DBG | About to run SSH command:
	I0131 03:20:13.044993 1465496 main.go:141] libmachine: (no-preload-625812) DBG | exit 0
	I0131 03:20:13.142763 1465496 main.go:141] libmachine: (no-preload-625812) DBG | SSH cmd err, output: <nil>: 
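
The WaitForSSH step above shells out to the external ssh binary with host-key checking disabled and the machine's id_rsa key, then runs "exit 0" to confirm the guest is reachable. An equivalent in-process sketch using golang.org/x/crypto/ssh is shown below as an illustration only; libmachine's actual runner builds the external command seen in the log instead.

// Hypothetical sketch: run a single command on the guest over SSH with
// key-based auth and host-key checking disabled, as the log above does.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, command string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Mirrors StrictHostKeyChecking=no / UserKnownHostsFile=/dev/null above.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.72.23:22", "docker",
		"/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa",
		"exit 0")
	fmt.Println(out, err)
}
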
	I0131 03:20:13.143065 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetConfigRaw
	I0131 03:20:13.143763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.146827 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147322 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.147356 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147639 1465496 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/config.json ...
	I0131 03:20:13.147843 1465496 machine.go:88] provisioning docker machine ...
	I0131 03:20:13.147866 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:13.148104 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148307 1465496 buildroot.go:166] provisioning hostname "no-preload-625812"
	I0131 03:20:13.148332 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148510 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.151259 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151623 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.151658 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151808 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.152034 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152222 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152415 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.152601 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.152979 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.152996 1465496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-625812 && echo "no-preload-625812" | sudo tee /etc/hostname
	I0131 03:20:13.302957 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-625812
	
	I0131 03:20:13.302989 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.306162 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306612 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.306656 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306932 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.307236 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307458 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307644 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.307891 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.308385 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.308415 1465496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-625812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-625812/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-625812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:20:13.459393 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:20:13.459432 1465496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:20:13.459458 1465496 buildroot.go:174] setting up certificates
	I0131 03:20:13.459476 1465496 provision.go:83] configureAuth start
	I0131 03:20:13.459490 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.459818 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.462867 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463301 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.463333 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463516 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.466156 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466597 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.466629 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466788 1465496 provision.go:138] copyHostCerts
	I0131 03:20:13.466856 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:20:13.466870 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:20:13.466945 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:20:13.467051 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:20:13.467061 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:20:13.467099 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:20:13.467182 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:20:13.467195 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:20:13.467226 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:20:13.467295 1465496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.no-preload-625812 san=[192.168.72.23 192.168.72.23 localhost 127.0.0.1 minikube no-preload-625812]
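
The "generating server cert" line above lists the subject alternative names baked into server.pem (the machine IP, localhost, 127.0.0.1, minikube, and the hostname). A hedged Go sketch of producing a certificate with those IP and DNS SANs via crypto/x509 follows; it is self-signed for brevity, whereas minikube signs the server certificate with its cluster CA key.

// Hypothetical sketch (not minikube's provision code): generate a
// self-signed server certificate carrying IP and DNS SANs like those
// listed in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-625812"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list above.
		IPAddresses: []net.IP{net.ParseIP("192.168.72.23"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-625812"},
	}
	// Self-signed here for brevity; minikube signs with its cluster CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
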
	I0131 03:20:13.629331 1465496 provision.go:172] copyRemoteCerts
	I0131 03:20:13.629392 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:20:13.629420 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.632451 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.632871 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.632903 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.633155 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.633334 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.633502 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.633643 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:13.729991 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:20:13.755853 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:20:13.781125 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:20:13.803778 1465496 provision.go:86] duration metric: configureAuth took 344.286867ms
	I0131 03:20:13.803820 1465496 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:20:13.804030 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:20:13.804138 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.807234 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807675 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.807736 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807899 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.808108 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808307 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808461 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.808663 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.809033 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.809055 1465496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:20:14.179008 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:20:14.179039 1465496 machine.go:91] provisioned docker machine in 1.031179568s
	I0131 03:20:14.179055 1465496 start.go:300] post-start starting for "no-preload-625812" (driver="kvm2")
	I0131 03:20:14.179072 1465496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:20:14.179134 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.179500 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:20:14.179542 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.183050 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183483 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.183515 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183726 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.183919 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.184103 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.184299 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.282828 1465496 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:20:14.288098 1465496 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:20:14.288135 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:20:14.288242 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:20:14.288351 1465496 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:20:14.288482 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:20:14.297359 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:14.323339 1465496 start.go:303] post-start completed in 144.265535ms
	I0131 03:20:14.323379 1465496 fix.go:56] fixHost completed within 20.659162262s
	I0131 03:20:14.323408 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.326649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.327063 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327386 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.327693 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.327882 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.328068 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.328260 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:14.328638 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:14.328668 1465496 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:20:14.464275 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671214.411008277
	
	I0131 03:20:14.464299 1465496 fix.go:206] guest clock: 1706671214.411008277
	I0131 03:20:14.464307 1465496 fix.go:219] Guest: 2024-01-31 03:20:14.411008277 +0000 UTC Remote: 2024-01-31 03:20:14.32338512 +0000 UTC m=+358.954052365 (delta=87.623157ms)
	I0131 03:20:14.464327 1465496 fix.go:190] guest clock delta is within tolerance: 87.623157ms
	I0131 03:20:14.464332 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 20.800154018s
	I0131 03:20:14.464349 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.464664 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:14.467627 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.467912 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.467952 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.468086 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468622 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468827 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468918 1465496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:20:14.468974 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.469103 1465496 ssh_runner.go:195] Run: cat /version.json
	I0131 03:20:14.469143 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.471884 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472243 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472408 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472472 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472507 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472426 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472696 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472810 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472825 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473046 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473048 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473275 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.473288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473547 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.563583 1465496 ssh_runner.go:195] Run: systemctl --version
	I0131 03:20:14.602977 1465496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:20:14.752069 1465496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:20:14.759056 1465496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:20:14.759149 1465496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:20:14.778064 1465496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
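Conflicting bridge/podman CNI configurations are not deleted here, only renamed with a .mk_disabled suffix so the runtime ignores them in favor of the config minikube writes later. A quick way to see what was moved aside on the guest (a sketch, assuming the no-preload-625812 profile name from the logs above):

    minikube ssh -p no-preload-625812 -- sudo ls -l /etc/cni/net.d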
	I0131 03:20:14.778102 1465496 start.go:475] detecting cgroup driver to use...
	I0131 03:20:14.778197 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:20:14.791672 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:20:14.803938 1465496 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:20:14.804018 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:20:14.816689 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:20:14.829415 1465496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:20:14.956428 1465496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:20:15.082172 1465496 docker.go:233] disabling docker service ...
	I0131 03:20:15.082260 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:20:15.094675 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:20:15.106262 1465496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:20:15.229460 1465496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:20:15.341585 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:20:15.354587 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:20:15.374141 1465496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:20:15.374228 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.386153 1465496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:20:15.386224 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.398130 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.407759 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
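After these sed edits, the drop-in /etc/crio/crio.conf.d/02-crio.conf should carry the following keys (sketch of the relevant lines only; the surrounding TOML sections are omitted):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

That is, CRI-O is pinned to the pause:3.9 image and to the cgroupfs driver, with conmon placed in the pod cgroup, matching the kubelet configuration generated further below.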
	I0131 03:20:15.417278 1465496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:20:15.427128 1465496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:20:15.437249 1465496 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:20:15.437318 1465496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:20:15.451522 1465496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
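The sysctl probe fails first because /proc/sys/net/bridge/ only appears once br_netfilter is loaded; the module is then loaded and IPv4 forwarding is switched on. The resulting state can be verified by hand with (sketch):

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables
    cat /proc/sys/net/ipv4/ip_forward    # should print 1 after the echo above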
	I0131 03:20:15.460741 1465496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:20:15.564813 1465496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:20:15.729334 1465496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:20:15.729436 1465496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:20:15.734544 1465496 start.go:543] Will wait 60s for crictl version
	I0131 03:20:15.734634 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:15.738536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:20:15.789942 1465496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:20:15.790066 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.844864 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.895286 1465496 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0131 03:20:13.649824 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.150192 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.649250 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.677858 1466459 api_server.go:72] duration metric: took 2.528895825s to wait for apiserver process to appear ...
	I0131 03:20:14.677890 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:14.677920 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:14.688429 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:17.190684 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:15.896701 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:15.899655 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900079 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:15.900105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900392 1465496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0131 03:20:15.904607 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:15.916202 1465496 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 03:20:15.916255 1465496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:15.964126 1465496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0131 03:20:15.964157 1465496 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:20:15.964213 1465496 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.964249 1465496 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.964291 1465496 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.964278 1465496 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.964411 1465496 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0131 03:20:15.964472 1465496 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.964696 1465496 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.964771 1465496 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:15.965842 1465496 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.966659 1465496 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0131 03:20:15.966705 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.966737 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.967221 1465496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.967386 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.157890 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.160428 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0131 03:20:16.170727 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.185791 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.209517 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.212835 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.215809 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.221405 1465496 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0131 03:20:16.221457 1465496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.221504 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369265 1465496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0131 03:20:16.369302 1465496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0131 03:20:16.369324 1465496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.369340 1465496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.369344 1465496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0131 03:20:16.369367 1465496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.369382 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369392 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369404 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369474 1465496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0131 03:20:16.369494 1465496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.369506 1465496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0131 03:20:16.369521 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369529 1465496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.369562 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369617 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.384313 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.384333 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.470950 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0131 03:20:16.471044 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.471091 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.496271 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0131 03:20:16.496296 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496398 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496485 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:16.496488 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496338 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.496494 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496730 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.531464 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531550 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0131 03:20:16.531570 1465496 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531594 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531640 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531595 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0131 03:20:16.531669 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531638 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531738 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0131 03:20:16.536091 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0131 03:20:16.805880 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339660 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.807978952s)
	I0131 03:20:20.339703 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0131 03:20:20.339719 1465496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.533795146s)
	I0131 03:20:20.339744 1465496 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339785 1465496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0131 03:20:20.339823 1465496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339829 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339863 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:19.144422 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.144461 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.144481 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.199050 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.199092 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.199110 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.248370 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.248405 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:19.678887 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.699942 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.699975 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.178212 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.196360 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:20.196408 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.679003 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.685599 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:20:20.693909 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:20.693939 1466459 api_server.go:131] duration metric: took 6.016042033s to wait for apiserver health ...
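The per-check [+]/[-] listings above are the verbose form of the apiserver's /healthz endpoint: early requests are rejected with 403 (anonymous access), then individual post-start hooks flip from failed to ok until the endpoint returns a plain 200 "ok". The same probe can be reproduced manually (sketch; anonymous requests may still see the 403 shown above):

    curl -k 'https://192.168.39.232:8443/healthz?verbose'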
	I0131 03:20:20.693972 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:20.693978 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:20.695935 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:20.697296 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:20.708301 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
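The 457-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration for this node. To inspect what actually landed on the guest (a sketch, assuming the profile name matches the embed-certs-958254 node seen in this run):

    minikube ssh -p embed-certs-958254 -- sudo cat /etc/cni/net.d/1-k8s.conflist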
	I0131 03:20:20.730496 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:20.741756 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:20.741799 1466459 system_pods.go:61] "coredns-5dd5756b68-ntmxp" [bb90dd61-c60a-4beb-b77c-66c4b5ce56a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:20.741810 1466459 system_pods.go:61] "etcd-embed-certs-958254" [69a5883a-307d-47d1-86ef-6f76bf77bdff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:20.741830 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [1cad3813-0df9-4729-862f-d1ab237d297c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:20.741841 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [34bfed89-5c8c-4294-843b-d32261c8fb5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:20.741851 1466459 system_pods.go:61] "kube-proxy-q6dmr" [092e0786-80f7-480c-8ede-95e11c1f17a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:20.741862 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [28c8d75e-9517-4ccc-85ef-5b535973c829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:20.741876 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-d8x5f" [fc69fea8-ab7b-4f3d-980f-7ad995027e77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:20.741889 1466459 system_pods.go:61] "storage-provisioner" [5026a00d-8df8-408a-a164-cf22697260e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:20.741898 1466459 system_pods.go:74] duration metric: took 11.375298ms to wait for pod list to return data ...
	I0131 03:20:20.741912 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:20.748073 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:20.748110 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:20.748125 1466459 node_conditions.go:105] duration metric: took 6.206594ms to run NodePressure ...
	I0131 03:20:20.748147 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:21.022867 1466459 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028572 1466459 kubeadm.go:787] kubelet initialised
	I0131 03:20:21.028600 1466459 kubeadm.go:788] duration metric: took 5.696903ms waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028612 1466459 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:21.034373 1466459 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.040977 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041008 1466459 pod_ready.go:81] duration metric: took 6.605955ms waiting for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.041021 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041029 1466459 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.047304 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047360 1466459 pod_ready.go:81] duration metric: took 6.317423ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.047379 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047397 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.054356 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054380 1466459 pod_ready.go:81] duration metric: took 6.969808ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.054393 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054405 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.066327 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:19.688890 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.187659 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
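These pod_ready.go loops (for the embed-certs system pods as well as the recurring metrics-server polls) repeatedly fetch pod status from the apiserver until the Ready condition is true or the 4m0s budget is exhausted, skipping pods whose node is itself not yet Ready. Roughly the same wait expressed with kubectl (a sketch; the context name is assumed to equal the profile/node name in the log):

    kubectl --context embed-certs-958254 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m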
	I0131 03:20:22.403415 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.063558989s)
	I0131 03:20:22.403448 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0131 03:20:22.403467 1465496 ssh_runner.go:235] Completed: which crictl: (2.063583602s)
	I0131 03:20:22.403536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:22.403473 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.403667 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.453126 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0131 03:20:22.453255 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:25.325221 1465496 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.871938157s)
	I0131 03:20:25.325266 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0131 03:20:25.325371 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.92167713s)
	I0131 03:20:25.325397 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0131 03:20:25.325430 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.325498 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.562106 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.562702 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.562730 1466459 pod_ready.go:81] duration metric: took 5.508313651s waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.562740 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570741 1466459 pod_ready.go:92] pod "kube-proxy-q6dmr" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.570776 1466459 pod_ready.go:81] duration metric: took 8.02796ms waiting for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570788 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.532998 1465727 kubeadm.go:787] kubelet initialised
	I0131 03:20:23.533031 1465727 kubeadm.go:788] duration metric: took 39.585413252s waiting for restarted kubelet to initialise ...
	I0131 03:20:23.533041 1465727 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:23.538956 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545637 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.545665 1465727 pod_ready.go:81] duration metric: took 6.67341ms waiting for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545679 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552018 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.552047 1465727 pod_ready.go:81] duration metric: took 6.359089ms waiting for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552061 1465727 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557416 1465727 pod_ready.go:92] pod "etcd-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.557446 1465727 pod_ready.go:81] duration metric: took 5.375834ms waiting for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557458 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563429 1465727 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.563458 1465727 pod_ready.go:81] duration metric: took 5.99092ms waiting for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563470 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931088 1465727 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.931123 1465727 pod_ready.go:81] duration metric: took 367.644608ms waiting for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931135 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330635 1465727 pod_ready.go:92] pod "kube-proxy-7dtkz" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.330663 1465727 pod_ready.go:81] duration metric: took 399.520658ms waiting for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330673 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731521 1465727 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.731554 1465727 pod_ready.go:81] duration metric: took 400.873461ms waiting for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731568 1465727 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.738444 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:24.686688 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.688623 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:29.186579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.180697 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.855170809s)
	I0131 03:20:28.180729 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0131 03:20:28.180767 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:28.180841 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:29.652395 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.471522862s)
	I0131 03:20:29.652425 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0131 03:20:29.652463 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:29.652540 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:28.578108 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.077401 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.080970 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.739586 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:30.739736 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.238815 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.187176 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.188862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.502715 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.85014178s)
	I0131 03:20:31.502759 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0131 03:20:31.502787 1465496 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:31.502844 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:32.554143 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.051250967s)
	I0131 03:20:32.554188 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0131 03:20:32.554229 1465496 cache_images.go:123] Successfully loaded all cached images
	I0131 03:20:32.554282 1465496 cache_images.go:92] LoadImages completed in 16.590108265s
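Because no preload tarball exists for v1.29.0-rc.2, each cached image under /var/lib/minikube/images was streamed into CRI-O's store with podman load, as logged above. Whether a given transfer stuck can be confirmed afterwards with the same commands the log itself uses (sketch):

    sudo crictl images --output json
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.5.10-0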
	I0131 03:20:32.554386 1465496 ssh_runner.go:195] Run: crio config
	I0131 03:20:32.619584 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:32.619612 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:32.619637 1465496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:32.619665 1465496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-625812 NodeName:no-preload-625812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:32.619840 1465496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-625812"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:32.619939 1465496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-625812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:32.620017 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0131 03:20:32.628855 1465496 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:32.628963 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:32.636481 1465496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0131 03:20:32.654320 1465496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0131 03:20:32.670366 1465496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0131 03:20:32.688615 1465496 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:32.692444 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:32.705599 1465496 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812 for IP: 192.168.72.23
	I0131 03:20:32.705644 1465496 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:32.705822 1465496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:32.705894 1465496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:32.705997 1465496 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/client.key
	I0131 03:20:32.706058 1465496 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key.a30a8404
	I0131 03:20:32.706092 1465496 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key
	I0131 03:20:32.706194 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:32.706221 1465496 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:32.706231 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:32.706258 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:32.706284 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:32.706310 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:32.706349 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:32.707138 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:32.729972 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:20:32.753498 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:32.775599 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:20:32.799455 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:32.822732 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:32.845839 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:32.868933 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:32.891565 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:32.914752 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:32.937305 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:32.960253 1465496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:32.976285 1465496 ssh_runner.go:195] Run: openssl version
	I0131 03:20:32.981630 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:32.990533 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994914 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994986 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:33.000249 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:33.009516 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:33.018643 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023046 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023106 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.028238 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:33.036925 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:33.045708 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050442 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050536 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.056067 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:33.065200 1465496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:33.069489 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:33.075140 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:33.080981 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:33.087018 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:33.092665 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:33.099605 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
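
	The six openssl runs above (apiserver-etcd-client, apiserver-kubelet-client, etcd server/healthcheck-client/peer, front-proxy-client) each use `-checkend 86400` to confirm the certificate remains valid for at least the next 24 hours before the cluster is restarted. Below is a minimal standalone Go sketch of the same expiry check, not minikube's own code; the path simply mirrors the first check above.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same file as the first `openssl x509 -checkend 86400` run in the log.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `-checkend 86400`: fail if the certificate expires
	// within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
```
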
	I0131 03:20:33.106207 1465496 kubeadm.go:404] StartCluster: {Name:no-preload-625812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:33.106310 1465496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:33.106376 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:33.150992 1465496 cri.go:89] found id: ""
	I0131 03:20:33.151088 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:33.161105 1465496 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:33.161131 1465496 kubeadm.go:636] restartCluster start
	I0131 03:20:33.161219 1465496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:33.170638 1465496 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.172109 1465496 kubeconfig.go:92] found "no-preload-625812" server: "https://192.168.72.23:8443"
	I0131 03:20:33.175582 1465496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:33.185433 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.185523 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.196952 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.685512 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.685612 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.696682 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.186433 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.197957 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.685533 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.685640 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.696731 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:35.186267 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.186369 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.197982 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.578014 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:33.578038 1466459 pod_ready.go:81] duration metric: took 7.007241801s waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:33.578047 1466459 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:35.585039 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.585299 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.737680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.740698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686379 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:38.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686193 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.686284 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.697343 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.185858 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.185960 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.197161 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.685546 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.685646 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.696796 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.186186 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.186280 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.197357 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.685916 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.686012 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.700288 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.185723 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.185820 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.197397 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.685651 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.685757 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.697204 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.185744 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.185844 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.198598 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.686185 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.686267 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.697736 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.186432 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.198099 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.085028 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.585359 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.238117 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.239129 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.687687 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:43.186737 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.686132 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.686236 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.699172 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.185642 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.185744 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.198284 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.685827 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.685935 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.698501 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.185953 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.186088 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.196802 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.686371 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.686445 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.698536 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.186445 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:43.186560 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:43.198640 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.198679 1465496 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:43.198690 1465496 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:43.198704 1465496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:43.198765 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:43.235648 1465496 cri.go:89] found id: ""
	I0131 03:20:43.235740 1465496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:43.252848 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:43.263501 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:43.263590 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274044 1465496 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274075 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:43.402961 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.454642 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051640672s)
	I0131 03:20:44.454673 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.660185 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.744795 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.816577 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:44.816690 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:45.316895 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:44.591170 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.085954 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:44.739730 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.240982 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.686082 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.687451 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.816800 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.317657 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.816892 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.317696 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.342389 1465496 api_server.go:72] duration metric: took 2.525810484s to wait for apiserver process to appear ...
	I0131 03:20:47.342423 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:47.342448 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.385155 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.385192 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.385206 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.431253 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.431293 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.842624 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.847644 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:51.847685 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.343335 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.348723 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:52.348780 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.842935 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.848263 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:20:52.863072 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:20:52.863104 1465496 api_server.go:131] duration metric: took 5.520672047s to wait for apiserver health ...
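
	The health wait above tolerates an initial 403 (the probe is unauthenticated, so `system:anonymous` is rejected until the RBAC bootstrap roles are in place) and then 500s while the remaining post-start hooks finish, before the endpoint finally returns 200. The Go sketch below is a rough illustration of that polling pattern, not the minikube implementation; the timeout and interval are illustrative, and certificate verification is skipped only because the test cluster serves a self-signed certificate.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes. Non-200 responses (403, 500) are
// treated as "not ready yet", matching the behaviour visible in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Self-signed certificate on the private test cluster.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.23:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
```
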
	I0131 03:20:52.863113 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:52.863120 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:52.865141 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:49.585837 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.087030 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:49.738408 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:51.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:50.186754 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.197217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.866822 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:52.881451 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:52.918954 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:52.930533 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:52.930566 1465496 system_pods.go:61] "coredns-76f75df574-4qhpt" [9a5c2a49-f787-456a-9d15-cea2e111c6fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:52.930575 1465496 system_pods.go:61] "etcd-no-preload-625812" [2dbdb2c3-dd04-40de-80b4-caf18f1df2e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:52.930587 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [fd209808-5ebc-464e-b14b-88c6c830d7bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:52.930593 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [1f2cb9ec-cec9-4c45-8b78-0c9a9c0c9821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:52.930600 1465496 system_pods.go:61] "kube-proxy-8fdx9" [d1311d92-482b-4aa2-9dd3-053597717aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:52.930607 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [f7b0ba21-6c1d-4c67-aa69-6086b28ddf78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:52.930614 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-sjndx" [6bcdb3bb-4e28-4127-a273-091b44059d10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:52.930620 1465496 system_pods.go:61] "storage-provisioner" [66a4003b-e35e-4216-8d27-e8897a6ddc71] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:52.930627 1465496 system_pods.go:74] duration metric: took 11.645516ms to wait for pod list to return data ...
	I0131 03:20:52.930635 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:52.943250 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:52.943291 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:52.943306 1465496 node_conditions.go:105] duration metric: took 12.665118ms to run NodePressure ...
	I0131 03:20:52.943328 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:53.231968 1465496 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239131 1465496 kubeadm.go:787] kubelet initialised
	I0131 03:20:53.239162 1465496 kubeadm.go:788] duration metric: took 7.159608ms waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239171 1465496 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:53.248561 1465496 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:55.256463 1465496 pod_ready.go:102] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"False"
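
	Each pod_ready.go:102 line in this log is one iteration of a wait loop: the test polls a pod until its Ready condition reports True or the 4m0s budget runs out, then records the total as a duration metric. The sketch below is not the minikube helper itself, just a rough client-go equivalent of such a wait; the kubeconfig path, poll interval, namespace and pod name are illustrative placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the
// condition these pod_ready.go lines are waiting on.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors "waiting up to 4m0s"
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-4qhpt", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```
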
	I0131 03:20:54.585633 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.086475 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.239922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.738132 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.686904 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.687249 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.187579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.261900 1465496 pod_ready.go:92] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:57.261928 1465496 pod_ready.go:81] duration metric: took 4.013340748s waiting for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:57.261940 1465496 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:59.268779 1465496 pod_ready.go:102] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.586066 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:02.085212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:58.739138 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.739184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:03.243732 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:01.686704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.186767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.771061 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:00.771093 1465496 pod_ready.go:81] duration metric: took 3.509144879s waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:00.771107 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279749 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.279778 1465496 pod_ready.go:81] duration metric: took 1.508661327s waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279792 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286520 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.286550 1465496 pod_ready.go:81] duration metric: took 6.748377ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286564 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292455 1465496 pod_ready.go:92] pod "kube-proxy-8fdx9" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.292479 1465496 pod_ready.go:81] duration metric: took 5.904786ms waiting for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292491 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:04.300076 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.086312 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.086965 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:05.737969 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:07.738025 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.686645 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:09.186769 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.300932 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.799183 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:06.799208 1465496 pod_ready.go:81] duration metric: took 4.506710382s waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:06.799220 1465496 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:08.806102 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:08.585128 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.586208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.085360 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.238339 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:12.739920 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.186807 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.686030 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.306903 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.808471 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.085478 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.584968 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.238994 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.738301 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.686243 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.687966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:16.306169 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:18.306368 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.585283 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.085635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.738554 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:21.739391 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.186216 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.186318 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.186605 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.807270 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:23.307367 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.086508 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.585310 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.239650 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.739133 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.687020 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.186319 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:25.807083 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:27.807373 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.809229 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:28.586494 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.085758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.086070 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.237951 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.239234 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.186403 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.186539 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:32.305137 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:34.306664 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.586212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.085235 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.737751 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.239168 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.187669 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:37.686468 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.806650 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:39.305925 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.586428 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.084565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.739723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.237973 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.186321 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:42.187314 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:44.188149 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:41.307318 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.806323 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.085539 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.585341 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.239462 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.738184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:46.686042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.686866 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.806734 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.305446 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.305723 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.085346 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.085442 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:49.738268 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.239669 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.691518 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:53.186195 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.306654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.806020 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.085761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.586368 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.738548 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.739623 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:55.686288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:57.687383 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.807570 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.309552 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.084865 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.085071 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.085111 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.239410 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.239532 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:00.186408 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:02.186782 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.186839 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.806329 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.584749 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:07.586565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.739463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.740128 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.237766 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.187392 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.685886 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.805996 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.807179 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.086003 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.585799 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.238067 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.239177 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.686223 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.686341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:11.305779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:13.307616 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.085808 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.584477 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:14.738859 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.238767 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.187173 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.687034 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.806730 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:18.306392 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.584606 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.585553 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.738470 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.739486 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.185802 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:22.187625 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.806949 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.306121 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:25.306685 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.585692 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.085348 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.237900 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.238299 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.686574 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.687740 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.186290 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:27.805534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.806722 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.585853 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.087573 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.738699 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:30.740922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.241273 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.687338 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.186661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:32.306153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.306543 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.584981 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.585484 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.085009 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.739413 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.240386 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.687329 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:39.185388 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.308028 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.806629 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.085644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.585560 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.242599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.737723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.186288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.186859 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.306389 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.586579 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.085969 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.739244 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.237508 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:45.188774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.687222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:46.306909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:48.807077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.584667 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.584768 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.239422 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.687896 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:52.188700 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.306677 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.806006 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.585081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.585777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.085122 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.237822 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:56.238861 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.686276 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:57.186263 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.806184 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.306128 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.306364 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.588396 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.598213 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.737414 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.737727 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.739935 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:59.685823 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:01.686758 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:04.185852 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.807107 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.305740 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.085415 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.585036 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.239645 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.739347 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:06.686504 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:08.687322 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.305816 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.305938 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.586253 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.085522 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:10.239099 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.738591 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.186874 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.686181 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.306129 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.806507 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.585172 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.586137 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.738697 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.739523 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:15.686511 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:17.687193 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.306767 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.808302 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:19.085852 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.586641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.739573 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.238839 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:20.187546 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:22.687140 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.306401 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.307029 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.085548 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:26.586436 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.737681 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.737740 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.687572 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.186506 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.808456 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:28.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:30.307207 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.085660 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.087058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.739207 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.238687 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.686331 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.688381 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.187104 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.805987 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.806181 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:33.586190 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.085219 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.085516 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.238857 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.239092 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.687993 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.688870 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.808335 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.085919 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.585866 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.738192 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.738455 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.739283 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.185567 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.186680 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.307589 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.309027 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:44.586117 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.085597 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.238409 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.240204 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.685781 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.686167 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.807531 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.807973 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:50.308410 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.086271 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.086456 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.737691 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.739418 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.686475 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.687616 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:52.806510 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.806619 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:53.586673 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.085541 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.085777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.238680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.238735 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.239259 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.685972 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.686560 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.806707 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.806764 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.087035 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.088546 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.239507 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.240463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.686709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.687576 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.806909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:03.306534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.307522 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.585131 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.585178 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.738411 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.738605 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.186000 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.686048 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.806058 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.306442 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:08.585611 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.088448 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:09.238896 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.239934 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.186391 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.187940 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.680057 1465898 pod_ready.go:81] duration metric: took 4m0.000955013s waiting for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:12.680105 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:12.680132 1465898 pod_ready.go:38] duration metric: took 4m8.549185211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:12.680181 1465898 kubeadm.go:640] restartCluster took 4m32.094843295s
	W0131 03:24:12.680310 1465898 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:12.680376 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
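	(Note: the interleaved pod_ready.go:102 lines above come from four parallel minikube start runs, each polling its metrics-server pod for the Ready condition until the 4m0s budget runs out; once the wait expires at pod_ready.go:81, restartCluster gives up and falls back to a full kubeadm reset, as in the Run line just above. Expressed as a shell sketch, the readiness wait is roughly the following; the profile name is the default-k8s-diff-port-873005 cluster from this run and k8s-app=metrics-server is the addon's usual selector, both assumptions for the example, since minikube actually polls via client-go rather than kubectl.)
	    # illustrative only: what the 4m readiness wait amounts to with kubectl
	    kubectl --context default-k8s-diff-port-873005 -n kube-system wait pod \
	      -l k8s-app=metrics-server --for=condition=Ready --timeout=4m \
	      || echo "metrics-server never became Ready; restartCluster will reset the node"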
	I0131 03:24:12.307149 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:14.307483 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.586901 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.087404 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.738698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.239338 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.239499 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.806617 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:19.305298 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.585870 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.087112 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:20.737368 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:22.738599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.306715 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.807030 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.586072 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:25.586464 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.586525 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:24.731792 1465727 pod_ready.go:81] duration metric: took 4m0.00020412s waiting for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:24.731846 1465727 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:24.731869 1465727 pod_ready.go:38] duration metric: took 4m1.198813077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:24.731907 1465727 kubeadm.go:640] restartCluster took 5m3.213957096s
	W0131 03:24:24.731983 1465727 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:24.732022 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:26.064348 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.383924825s)
	I0131 03:24:26.064423 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:26.076943 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:26.087474 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:26.095980 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:26.096026 1465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:26.286603 1465898 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
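	(Note: because kubeadm reset removed the kubeconfig files, the ls-based config check above fails with status 2 and stale-config cleanup is skipped; minikube then re-initializes the control plane from the freshly written /var/tmp/minikube/kubeadm.yaml. Pulled together from the interleaved lines above, the sequence for the v1.28.4 node is essentially the commands below; the --ignore-preflight-errors list is abbreviated here, the full list is in the logged Start line.)
	    # condensed from the ssh_runner lines above (v1.28.4 binaries, CRI-O socket as logged)
	    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	        --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem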
	I0131 03:24:25.808330 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.809779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.308001 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.087127 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:32.589212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:31.227776 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.495715112s)
	I0131 03:24:31.227855 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:31.241889 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:31.251082 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:31.259843 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:31.259887 1465727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0131 03:24:31.469869 1465727 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:32.310672 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:34.808959 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:36.696825 1465898 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:36.696904 1465898 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:36.696998 1465898 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:36.697121 1465898 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:36.697231 1465898 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:36.697306 1465898 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:36.699102 1465898 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:36.699244 1465898 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:36.699334 1465898 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:36.699475 1465898 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:36.699584 1465898 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:36.699700 1465898 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:36.699785 1465898 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:36.699873 1465898 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:36.699958 1465898 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:36.700052 1465898 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:36.700172 1465898 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:36.700217 1465898 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:36.700283 1465898 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:36.700345 1465898 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:36.700406 1465898 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:36.700482 1465898 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:36.700549 1465898 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:36.700647 1465898 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:36.700731 1465898 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:36.702370 1465898 out.go:204]   - Booting up control plane ...
	I0131 03:24:36.702525 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:36.702658 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:36.702731 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:36.702855 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:36.702975 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:36.703038 1465898 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:36.703248 1465898 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:36.703360 1465898 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503117 seconds
	I0131 03:24:36.703517 1465898 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:36.703652 1465898 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:36.703734 1465898 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:36.703950 1465898 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-873005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:36.704029 1465898 kubeadm.go:322] [bootstrap-token] Using token: 51ueuu.c5jl6zenf29j1pbj
	I0131 03:24:36.706123 1465898 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:36.706237 1465898 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:36.706316 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:36.706475 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:36.706662 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:36.706829 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:36.706946 1465898 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:36.707093 1465898 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:36.707179 1465898 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:36.707226 1465898 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:36.707236 1465898 kubeadm.go:322] 
	I0131 03:24:36.707310 1465898 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:36.707317 1465898 kubeadm.go:322] 
	I0131 03:24:36.707411 1465898 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:36.707418 1465898 kubeadm.go:322] 
	I0131 03:24:36.707438 1465898 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:36.707518 1465898 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:36.707590 1465898 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:36.707604 1465898 kubeadm.go:322] 
	I0131 03:24:36.707693 1465898 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:36.707706 1465898 kubeadm.go:322] 
	I0131 03:24:36.707775 1465898 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:36.707785 1465898 kubeadm.go:322] 
	I0131 03:24:36.707834 1465898 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:36.707932 1465898 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:36.708029 1465898 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:36.708038 1465898 kubeadm.go:322] 
	I0131 03:24:36.708135 1465898 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:36.708236 1465898 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:36.708245 1465898 kubeadm.go:322] 
	I0131 03:24:36.708341 1465898 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708458 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:36.708490 1465898 kubeadm.go:322] 	--control-plane 
	I0131 03:24:36.708499 1465898 kubeadm.go:322] 
	I0131 03:24:36.708601 1465898 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:36.708611 1465898 kubeadm.go:322] 
	I0131 03:24:36.708703 1465898 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708836 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:36.708855 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:24:36.708865 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:36.710643 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:33.579236 1466459 pod_ready.go:81] duration metric: took 4m0.001168183s waiting for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:33.579284 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:33.579320 1466459 pod_ready.go:38] duration metric: took 4m12.550695133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:33.579357 1466459 kubeadm.go:640] restartCluster took 4m32.725356038s
	W0131 03:24:33.579451 1466459 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:33.579495 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:36.712379 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:36.727135 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:36.752650 1465898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:36.752760 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.752766 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=default-k8s-diff-port-873005 minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.833601 1465898 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:37.204982 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:37.706104 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.205928 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.705169 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:39.205448 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.810623 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:39.308000 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:44.456046 1465727 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0131 03:24:44.456133 1465727 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:44.456239 1465727 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:44.456349 1465727 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:44.456507 1465727 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:44.456673 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:44.456815 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:44.456888 1465727 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0131 03:24:44.456975 1465727 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:44.458558 1465727 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:44.458637 1465727 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:44.458740 1465727 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:44.458837 1465727 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:44.458937 1465727 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:44.459040 1465727 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:44.459117 1465727 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:44.459212 1465727 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:44.459291 1465727 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:44.459385 1465727 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:44.459491 1465727 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:44.459552 1465727 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:44.459628 1465727 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:44.459691 1465727 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:44.459755 1465727 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:44.459827 1465727 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:44.459899 1465727 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:44.460002 1465727 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:44.461481 1465727 out.go:204]   - Booting up control plane ...
	I0131 03:24:44.461592 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:44.461687 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:44.461801 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:44.461930 1465727 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:44.462130 1465727 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:44.462255 1465727 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503405 seconds
	I0131 03:24:44.462398 1465727 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:44.462577 1465727 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:44.462653 1465727 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:44.462817 1465727 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-711547 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0131 03:24:44.462913 1465727 kubeadm.go:322] [bootstrap-token] Using token: etlsjx.t1u4cz6ewuek932w
	I0131 03:24:44.465248 1465727 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:44.465404 1465727 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:44.465615 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:44.465805 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:44.465987 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:44.466088 1465727 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:44.466170 1465727 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:44.466239 1465727 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:44.466247 1465727 kubeadm.go:322] 
	I0131 03:24:44.466332 1465727 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:44.466354 1465727 kubeadm.go:322] 
	I0131 03:24:44.466456 1465727 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:44.466473 1465727 kubeadm.go:322] 
	I0131 03:24:44.466524 1465727 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:44.466596 1465727 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:44.466677 1465727 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:44.466696 1465727 kubeadm.go:322] 
	I0131 03:24:44.466764 1465727 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:44.466870 1465727 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:44.466971 1465727 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:44.466988 1465727 kubeadm.go:322] 
	I0131 03:24:44.467085 1465727 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0131 03:24:44.467196 1465727 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:44.467208 1465727 kubeadm.go:322] 
	I0131 03:24:44.467300 1465727 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467443 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:44.467479 1465727 kubeadm.go:322]     --control-plane 	  
	I0131 03:24:44.467488 1465727 kubeadm.go:322] 
	I0131 03:24:44.467588 1465727 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:44.467599 1465727 kubeadm.go:322] 
	I0131 03:24:44.467695 1465727 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467834 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:44.467849 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:24:44.467858 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:44.470130 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:39.705234 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.205164 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.705674 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.205045 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.705592 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.205813 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.705913 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.205465 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.705236 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.205365 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.807553 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:43.809153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:47.613982 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.034446752s)
	I0131 03:24:47.614087 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:47.627141 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:47.635785 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:47.643856 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:47.643912 1466459 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:47.866988 1466459 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:44.472066 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:44.484082 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:44.503062 1465727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:44.503138 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.503164 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=old-k8s-version-711547 minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.557194 1465727 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:44.796311 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.296601 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.796904 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.296474 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.796658 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.296647 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.796712 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.296469 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.705251 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.205696 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.705947 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.205519 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.705735 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.205285 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.706009 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.205416 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.705969 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.205783 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.306658 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:48.307077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:50.311654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:49.705636 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.205958 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.456803 1465898 kubeadm.go:1088] duration metric: took 13.704121927s to wait for elevateKubeSystemPrivileges.
	I0131 03:24:50.456854 1465898 kubeadm.go:406] StartCluster complete in 5m9.932475085s
	I0131 03:24:50.456883 1465898 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.457001 1465898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:24:50.460015 1465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.460408 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:24:50.460617 1465898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:24:50.460718 1465898 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460745 1465898 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.460753 1465898 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:24:50.460798 1465898 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460831 1465898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-873005"
	I0131 03:24:50.460855 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461315 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461342 1465898 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.461361 1465898 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:50.461364 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0131 03:24:50.461369 1465898 addons.go:243] addon metrics-server should already be in state true
	I0131 03:24:50.461410 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461322 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461644 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.461778 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461812 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.460670 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:24:50.486168 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0131 03:24:50.486189 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0131 03:24:50.486323 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0131 03:24:50.486737 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487153 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487761 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.487781 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488055 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.488074 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488193 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.488460 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.488587 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.488984 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.489649 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.489717 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.490413 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.490433 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.492357 1465898 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.492372 1465898 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:24:50.492402 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.492774 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.492815 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.493142 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.493853 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.493904 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.510041 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0131 03:24:50.510628 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.511294 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.511316 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.511749 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.511982 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.512352 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0131 03:24:50.512842 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.513435 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.513454 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.513922 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.513984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.514319 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0131 03:24:50.516752 1465898 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:24:50.514718 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.514788 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.518232 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:24:50.518238 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.518248 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:24:50.518271 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.521721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.522659 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522988 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.523038 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.523050 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.523231 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.523401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.523571 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.526843 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.530691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.532381 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.534246 1465898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:24:50.535799 1465898 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.535826 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:24:50.535848 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.538666 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.538998 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.539031 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.539275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.540037 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0131 03:24:50.540217 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.540435 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.540502 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.540575 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.541462 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.541480 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.541918 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.542136 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.543588 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.546790 1465898 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.546807 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:24:50.546828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.549791 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550227 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.550254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550545 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.550712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.550827 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.550914 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.720404 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:24:50.750602 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:24:50.750631 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:24:50.770493 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.781740 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.831005 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:24:50.831037 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:24:50.957145 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:50.957195 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:24:50.995868 1465898 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-873005" context rescaled to 1 replicas
	I0131 03:24:50.995924 1465898 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:24:50.997774 1465898 out.go:177] * Verifying Kubernetes components...
	I0131 03:24:50.999400 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:51.127181 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:52.814257 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.093763301s)
	I0131 03:24:52.814295 1465898 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0131 03:24:53.442603 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.660817091s)
	I0131 03:24:53.442735 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.315510869s)
	I0131 03:24:53.442653 1465898 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.443214595s)
	I0131 03:24:53.442784 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442807 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442746 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442847 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442800 1465898 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.442686 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.672154364s)
	I0131 03:24:53.442931 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442944 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443178 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443204 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443234 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443271 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443290 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443307 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443324 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443326 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443342 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443355 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443370 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443443 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443463 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443474 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443484 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443558 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443571 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443834 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443843 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443852 1465898 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:53.443857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.444009 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.444018 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.477413 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.477442 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.477848 1465898 node_ready.go:49] node "default-k8s-diff-port-873005" has status "Ready":"True"
	I0131 03:24:53.477878 1465898 node_ready.go:38] duration metric: took 34.988647ms waiting for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.477903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.477913 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.477891 1465898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:53.477926 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:48.797209 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.296541 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.796400 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.297357 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.797175 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.297121 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.796457 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.297151 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.797043 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.296354 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.480701 1465898 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0131 03:24:53.482138 1465898 addons.go:505] enable addons completed in 3.021541847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0131 03:24:53.518183 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:52.806757 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:54.808761 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:53.796405 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.296358 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.796988 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.296633 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.797131 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.296750 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.797103 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.296955 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.796330 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.296387 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.837963 1466459 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:58.838075 1466459 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:58.838193 1466459 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:58.838328 1466459 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:58.838507 1466459 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:58.838599 1466459 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:58.840259 1466459 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:58.840364 1466459 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:58.840490 1466459 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:58.840620 1466459 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:58.840718 1466459 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:58.840826 1466459 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:58.840905 1466459 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:58.841008 1466459 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:58.841106 1466459 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:58.841214 1466459 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:58.841304 1466459 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:58.841349 1466459 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:58.841420 1466459 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:58.841492 1466459 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:58.841553 1466459 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:58.841621 1466459 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:58.841694 1466459 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:58.841805 1466459 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:58.841887 1466459 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:58.843555 1466459 out.go:204]   - Booting up control plane ...
	I0131 03:24:58.843684 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:58.843804 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:58.843917 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:58.844072 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:58.844208 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:58.844297 1466459 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:58.844540 1466459 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:58.844657 1466459 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003861 seconds
	I0131 03:24:58.844797 1466459 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:58.844947 1466459 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:58.845022 1466459 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:58.845232 1466459 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-958254 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:58.845309 1466459 kubeadm.go:322] [bootstrap-token] Using token: ash1vg.z2czyygl2nysl4yb
	I0131 03:24:58.846832 1466459 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:58.846943 1466459 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:58.847042 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:58.847238 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:58.847445 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:58.847620 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:58.847735 1466459 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:58.847908 1466459 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:58.847969 1466459 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:58.848034 1466459 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:58.848045 1466459 kubeadm.go:322] 
	I0131 03:24:58.848142 1466459 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:58.848152 1466459 kubeadm.go:322] 
	I0131 03:24:58.848279 1466459 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:58.848308 1466459 kubeadm.go:322] 
	I0131 03:24:58.848355 1466459 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:58.848440 1466459 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:58.848515 1466459 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:58.848531 1466459 kubeadm.go:322] 
	I0131 03:24:58.848611 1466459 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:58.848622 1466459 kubeadm.go:322] 
	I0131 03:24:58.848684 1466459 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:58.848692 1466459 kubeadm.go:322] 
	I0131 03:24:58.848769 1466459 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:58.848884 1466459 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:58.848987 1466459 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:58.848994 1466459 kubeadm.go:322] 
	I0131 03:24:58.849127 1466459 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:58.849252 1466459 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:58.849265 1466459 kubeadm.go:322] 
	I0131 03:24:58.849390 1466459 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849540 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:58.849572 1466459 kubeadm.go:322] 	--control-plane 
	I0131 03:24:58.849587 1466459 kubeadm.go:322] 
	I0131 03:24:58.849698 1466459 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:58.849710 1466459 kubeadm.go:322] 
	I0131 03:24:58.849817 1466459 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849963 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:58.849981 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:24:58.849991 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:58.851748 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:54.532127 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.532155 1465898 pod_ready.go:81] duration metric: took 1.013942045s waiting for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.532164 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537895 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.537924 1465898 pod_ready.go:81] duration metric: took 5.752669ms waiting for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537937 1465898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543819 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.543850 1465898 pod_ready.go:81] duration metric: took 5.903392ms waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543863 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549279 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.549303 1465898 pod_ready.go:81] duration metric: took 5.431331ms waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549315 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647791 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.647830 1465898 pod_ready.go:81] duration metric: took 98.504261ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647846 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446878 1465898 pod_ready.go:92] pod "kube-proxy-blwwq" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.446913 1465898 pod_ready.go:81] duration metric: took 799.058225ms waiting for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446927 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848226 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.848261 1465898 pod_ready.go:81] duration metric: took 401.323547ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848275 1465898 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:57.855091 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
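The pod_ready.go lines above poll each system pod until its Ready condition reports True, giving each pod up to 6m0s. Below is a minimal sketch of that check using client-go; the helper name (waitPodReady), the 500ms interval, and the kubeconfig path are assumptions for illustration, not minikube's actual code path.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // arbitrary poll interval for this sketch
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// kubeconfig path taken from the log above; pod name is one of the pods being waited on.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-873005", 6*time.Minute))
}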
	I0131 03:24:57.306243 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:59.307152 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:58.796423 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.297312 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.796598 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.296932 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.797306 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.963954 1465727 kubeadm.go:1088] duration metric: took 16.460870964s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:00.964007 1465727 kubeadm.go:406] StartCluster complete in 5m39.492487154s
	I0131 03:25:00.964037 1465727 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.964135 1465727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:00.965942 1465727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.966222 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:00.966379 1465727 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:00.966464 1465727 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966478 1465727 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966474 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:25:00.966502 1465727 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966514 1465727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-711547"
	I0131 03:25:00.966522 1465727 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-711547"
	W0131 03:25:00.966531 1465727 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:00.966493 1465727 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-711547"
	W0131 03:25:00.966557 1465727 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:00.966579 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966610 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966981 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.966993 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967028 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967040 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967142 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967186 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.986034 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0131 03:25:00.986291 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0131 03:25:00.986619 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.986746 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.987299 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987320 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987467 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987479 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987834 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.988010 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:00.988075 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0131 03:25:00.988399 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.989011 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.989031 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.989620 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.990204 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.990247 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.990830 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.991921 1465727 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-711547"
	W0131 03:25:00.991946 1465727 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:00.991979 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.992390 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.992429 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.996772 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.996817 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.009234 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0131 03:25:01.009861 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.010560 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.010580 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.011185 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.011401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.013070 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0131 03:25:01.013907 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.014029 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.016324 1465727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:01.014597 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.017922 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.018046 1465727 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.018070 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:01.018094 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.018526 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.019101 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:01.019150 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.019442 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0131 03:25:01.019888 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.020393 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.020424 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.020822 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.020992 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.021500 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.022242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.022654 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.022821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.022997 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
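sshutil.go above reports building a key-based SSH client (user docker, port 22, the machine's id_rsa) that the subsequent scp and kubectl runs go through. A rough stand-alone equivalent using golang.org/x/crypto/ssh follows; skipping the host-key check and running uname -a are simplifications for this sketch, not minikube's sshutil implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address copied from the sshutil.go line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // skipped only to keep the sketch short
	}
	client, err := ssh.Dial("tcp", "192.168.50.63:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("uname -a") // placeholder command
	fmt.Println(string(out), err)
}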
	I0131 03:25:01.023406 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.025473 1465727 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:01.026870 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:01.026888 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:01.026904 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.029751 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030085 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.030100 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030398 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.030647 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.030818 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.030977 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.037553 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0131 03:25:01.038049 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.038517 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.038542 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.038963 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.039329 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.041534 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.042115 1465727 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.042137 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:01.042170 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.045444 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.045973 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.045992 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.046187 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.046374 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.046619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.046751 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.284926 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:01.284951 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:01.298019 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:01.338666 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.364117 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.383424 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:01.383460 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:01.499627 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.499676 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:01.557563 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.633792 1465727 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-711547" context rescaled to 1 replicas
	I0131 03:25:01.633844 1465727 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:01.636944 1465727 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:01.638596 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:02.375769 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.07770508s)
	I0131 03:25:02.375806 1465727 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:02.849278 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.485115978s)
	I0131 03:25:02.849343 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849348 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.510642603s)
	I0131 03:25:02.849361 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849397 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849411 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849431 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291827391s)
	I0131 03:25:02.849463 1465727 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.210839065s)
	I0131 03:25:02.849466 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849478 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849490 1465727 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.851686 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851687 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851705 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851714 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851701 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851724 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851732 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851715 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851726 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851744 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851749 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851754 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851736 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851812 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851828 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.852136 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852158 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852178 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852187 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852194 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852203 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852214 1465727 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-711547"
	I0131 03:25:02.852220 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852249 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852257 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.878278 1465727 node_ready.go:49] node "old-k8s-version-711547" has status "Ready":"True"
	I0131 03:25:02.878313 1465727 node_ready.go:38] duration metric: took 28.809729ms waiting for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.878339 1465727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:02.906619 1465727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:02.910781 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.910809 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.911127 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.911137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.911148 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.913178 1465727 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0131 03:24:58.853196 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:58.880016 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
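The two runs above create /etc/cni/net.d and copy a 457-byte conflist into it, which is the "Configuring bridge CNI" step announced earlier in this stream. The sketch below writes a representative bridge-plus-portmap conflist to the same path; the JSON contents and the 10.244.0.0/16 subnet are illustrative assumptions, not the exact file minikube installs.

package main

import "os"

// An illustrative bridge CNI conflist; the real file minikube writes may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}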
	I0131 03:24:58.909967 1466459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:58.910062 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.910111 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=embed-certs-958254 minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.271954 1466459 ops.go:34] apiserver oom_adj: -16
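The oom_adj check above (cat /proc/$(pgrep kube-apiserver)/oom_adj, reported as -16) confirms the apiserver process is up and strongly protected from the OOM killer. A stdlib-only sketch of the same lookup, scanning /proc by process comm; this is an illustration, not the ops.go code that produced the log line.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		// Non-PID directories fail the comm read and are skipped.
		comm, err := os.ReadFile("/proc/" + e.Name() + "/comm")
		if err != nil || strings.TrimSpace(string(comm)) != "kube-apiserver" {
			continue
		}
		adj, err := os.ReadFile("/proc/" + e.Name() + "/oom_adj")
		if err != nil {
			continue
		}
		fmt.Printf("kube-apiserver (pid %s) oom_adj: %s", e.Name(), adj)
	}
}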
	I0131 03:24:59.310346 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.810934 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.310635 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.810402 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.310569 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.810714 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.310744 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.811360 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:03.311376 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.915069 1465727 addons.go:505] enable addons completed in 1.948706414s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0131 03:24:59.856962 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:02.358614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:01.807470 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:04.306044 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:03.811326 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.310435 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.811033 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.310537 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.810596 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.311182 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.811200 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.310633 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.810619 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:08.310985 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.914636 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:07.415226 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.414866 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.414894 1465727 pod_ready.go:81] duration metric: took 5.508246838s waiting for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.414904 1465727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421152 1465727 pod_ready.go:92] pod "kube-proxy-wzft2" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.421177 1465727 pod_ready.go:81] duration metric: took 6.2664ms waiting for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421191 1465727 pod_ready.go:38] duration metric: took 5.542837407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:08.421243 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:08.421313 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:08.439228 1465727 api_server.go:72] duration metric: took 6.805346982s to wait for apiserver process to appear ...
	I0131 03:25:08.439258 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:08.439321 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:25:08.445886 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:25:08.446826 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:25:08.446848 1465727 api_server.go:131] duration metric: took 7.582095ms to wait for apiserver health ...
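api_server.go above probes https://192.168.50.63:8443/healthz until it returns 200 with the body "ok", then reads the control-plane version. A self-contained sketch of that probe follows; skipping TLS verification is an assumption made only to keep the sketch stand-alone.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get("https://192.168.50.63:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok, as in the log above
}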
	I0131 03:25:08.446856 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:08.450063 1465727 system_pods.go:59] 4 kube-system pods found
	I0131 03:25:08.450085 1465727 system_pods.go:61] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.450089 1465727 system_pods.go:61] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.450095 1465727 system_pods.go:61] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.450100 1465727 system_pods.go:61] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.450112 1465727 system_pods.go:74] duration metric: took 3.250434ms to wait for pod list to return data ...
	I0131 03:25:08.450121 1465727 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:08.452528 1465727 default_sa.go:45] found service account: "default"
	I0131 03:25:08.452546 1465727 default_sa.go:55] duration metric: took 2.420247ms for default service account to be created ...
	I0131 03:25:08.452553 1465727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:08.457485 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.457514 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.457522 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.457533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.457540 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.457561 1465727 retry.go:31] will retry after 235.942588ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
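The retry.go waits here and further down (will retry after 235ms, 264ms, 296ms, 556ms, ...) show the pod listing being re-run with a growing, jittered delay until etcd, kube-apiserver, kube-controller-manager and kube-scheduler appear in kube-system. A simplified, stdlib-only sketch of that retry shape; the growth factor and jitter below are illustrative, not minikube's actual backoff parameters.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or maxElapsed is exhausted, sleeping an
// increasing, jittered interval between attempts.
func retry(fn func() error, initial, maxElapsed time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("gave up after %v: %w", time.Since(start), err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base interval between attempts
	}
}

func main() {
	attempts := 0
	_ = retry(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: etcd, kube-apiserver")
		}
		return nil
	}, 200*time.Millisecond, 30*time.Second)
}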
	I0131 03:25:04.856217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.856378 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.857457 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.800354 1465496 pod_ready.go:81] duration metric: took 4m0.001111271s waiting for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:06.800395 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:25:06.800424 1465496 pod_ready.go:38] duration metric: took 4m13.561240535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:06.800474 1465496 kubeadm.go:640] restartCluster took 4m33.63933558s
	W0131 03:25:06.800585 1465496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:25:06.800626 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:25:08.811193 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.310464 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.810641 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.310665 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.810667 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.995304 1466459 kubeadm.go:1088] duration metric: took 12.08531849s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:10.995343 1466459 kubeadm.go:406] StartCluster complete in 5m10.197561628s
	I0131 03:25:10.995368 1466459 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.995476 1466459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:10.997565 1466459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.998562 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:10.998861 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:25:10.999077 1466459 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:10.999167 1466459 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-958254"
	I0131 03:25:10.999184 1466459 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-958254"
	W0131 03:25:10.999192 1466459 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:10.999198 1466459 addons.go:69] Setting default-storageclass=true in profile "embed-certs-958254"
	I0131 03:25:10.999232 1466459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-958254"
	I0131 03:25:10.999234 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:10.999598 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999631 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999673 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999709 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999738 1466459 addons.go:69] Setting metrics-server=true in profile "embed-certs-958254"
	I0131 03:25:10.999759 1466459 addons.go:234] Setting addon metrics-server=true in "embed-certs-958254"
	W0131 03:25:10.999767 1466459 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:10.999811 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.000160 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.000206 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.020646 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0131 03:25:11.020716 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0131 03:25:11.021273 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021412 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021944 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.021972 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022107 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.022139 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022542 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022540 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022777 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.023181 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.023224 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.027202 1466459 addons.go:234] Setting addon default-storageclass=true in "embed-certs-958254"
	W0131 03:25:11.027230 1466459 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:11.027263 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.027702 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.027754 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.028003 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0131 03:25:11.029048 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.029571 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.029590 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.030209 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.030885 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.030931 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.042923 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0131 03:25:11.043492 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.044071 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.044086 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.044497 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.044800 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.046645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.049444 1466459 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:11.051401 1466459 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.051441 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:11.051477 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.054476 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055341 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.055429 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0131 03:25:11.055608 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.055626 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055808 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.056025 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.056244 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.056409 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.056920 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.056932 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.056989 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0131 03:25:11.057274 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.057428 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.057495 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.057847 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.057860 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.058662 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.059343 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.059372 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.059555 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.061701 1466459 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:11.063119 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:11.063138 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:11.063159 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.066101 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066408 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.066423 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066762 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.066931 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.067054 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.067162 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.080881 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0131 03:25:11.081403 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.081919 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.081931 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.082442 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.082905 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.085059 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.085518 1466459 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.085529 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:11.085545 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.087954 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.088806 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.088858 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.088868 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.089011 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.089197 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.089609 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.229346 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.255093 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:11.255124 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:11.278162 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.314832 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:11.314860 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:11.374433 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.374463 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:11.386186 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:11.431597 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.617487 1466459 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-958254" context rescaled to 1 replicas
	I0131 03:25:11.617543 1466459 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:11.620222 1466459 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:11.621888 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
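Verifying Kubernetes components starts with the kubelet unit check above. The snippet below shows the same is-active probe from Go, relying on the command's exit status rather than its output; it is purely illustrative and not the ssh_runner path the log uses.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet <unit>" exits 0 when the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}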
	I0131 03:25:08.700194 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.700226 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.700232 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.700238 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.700243 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.700267 1465727 retry.go:31] will retry after 264.487072ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:08.970950 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.970994 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.971002 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.971013 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.971020 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.971113 1465727 retry.go:31] will retry after 296.249207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.273631 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.273666 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.273675 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.273683 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.273696 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.273722 1465727 retry.go:31] will retry after 556.880076ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.835957 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.835985 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.835991 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.835997 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.836002 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.836020 1465727 retry.go:31] will retry after 541.012405ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:10.382622 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:10.382657 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:10.382665 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:10.382674 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:10.382681 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:10.382705 1465727 retry.go:31] will retry after 644.079363ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.036738 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.036777 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.036785 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.036796 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.036803 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.036825 1465727 retry.go:31] will retry after 832.963851ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.877526 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.877569 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.877578 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.877589 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.877597 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.877635 1465727 retry.go:31] will retry after 1.088792554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:12.972355 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:12.972391 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:12.972397 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:12.972403 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:12.972408 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:12.972428 1465727 retry.go:31] will retry after 1.37018086s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
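The repeated "4 kube-system pods found ... will retry after ..." lines from process 1465727 show minikube polling kube-system for the control-plane components (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) and backing off with growing delays while they are missing. A minimal Go sketch of that polling pattern follows; the component check is a stub and the growth factor is an assumption for illustration, whereas minikube's real code lists pods through the Kubernetes client and uses its own retry helper (retry.go).

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollSystemPods retries a component check with an increasing wait
// until nothing is missing or the deadline passes, roughly mirroring
// the retry.go log lines above.
func pollSystemPods(check func() []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for: " + fmt.Sprint(missing))
		}
		fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
		time.Sleep(wait)
		wait = wait * 3 / 2 // grow the interval, like the increasing delays in the log
	}
}

func main() {
	attempts := 0
	err := pollSystemPods(func() []string {
		attempts++
		if attempts < 4 {
			return []string{"etcd", "kube-apiserver"}
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}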
	I0131 03:25:13.615542 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337333269s)
	I0131 03:25:13.615599 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.229373467s)
	I0131 03:25:13.615607 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615633 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.615632 1466459 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:13.615738 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.386359945s)
	I0131 03:25:13.615790 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615807 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616101 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616109 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616118 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616129 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616138 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616174 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616184 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616194 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616204 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616351 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616374 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.617924 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.618094 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.618057 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.783459 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.783487 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.783847 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.783872 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.966310 1466459 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.344369704s)
	I0131 03:25:13.966372 1466459 node_ready.go:35] waiting up to 6m0s for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.966498 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.534826964s)
	I0131 03:25:13.966582 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.966602 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.966990 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967011 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967023 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.967033 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.967278 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967298 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967310 1466459 addons.go:470] Verifying addon metrics-server=true in "embed-certs-958254"
	I0131 03:25:13.970159 1466459 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:10.858108 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.357207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.971527 1466459 addons.go:505] enable addons completed in 2.972461213s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:13.987533 1466459 node_ready.go:49] node "embed-certs-958254" has status "Ready":"True"
	I0131 03:25:13.987564 1466459 node_ready.go:38] duration metric: took 21.175558ms waiting for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.987577 1466459 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:13.998968 1466459 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505741 1466459 pod_ready.go:92] pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.505764 1466459 pod_ready.go:81] duration metric: took 1.506759288s waiting for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505775 1466459 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511011 1466459 pod_ready.go:92] pod "etcd-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.511037 1466459 pod_ready.go:81] duration metric: took 5.255671ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511050 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515672 1466459 pod_ready.go:92] pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.515691 1466459 pod_ready.go:81] duration metric: took 4.632936ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515699 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520372 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.520388 1466459 pod_ready.go:81] duration metric: took 4.683171ms waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520397 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570633 1466459 pod_ready.go:92] pod "kube-proxy-2n2v5" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.570660 1466459 pod_ready.go:81] duration metric: took 50.257557ms waiting for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570671 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970302 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.970325 1466459 pod_ready.go:81] duration metric: took 399.647846ms waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970336 1466459 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:17.977775 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
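The pod_ready.go lines above wait on each system-critical pod until its Ready condition reports True; metrics-server keeps logging "Ready":"False" because its container never becomes ready in this run. The short Go sketch below models only that condition check with local stand-in types (they are not the k8s.io/api types), to make clear what "has status Ready: True/False" is testing.

package main

import "fmt"

// Minimal stand-ins for the pod status fields consulted by the
// pod_ready checks in the log. Illustrative model only.
type condition struct {
	Type   string
	Status string
}

type podStatus struct {
	Phase      string
	Conditions []condition
}

// isReady mirrors the log's "Ready":"True" test: a pod counts as
// ready only when its Ready condition is True.
func isReady(s podStatus) bool {
	for _, c := range s.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	metrics := podStatus{
		Phase:      "Pending",
		Conditions: []condition{{Type: "Ready", Status: "False"}},
	}
	coredns := podStatus{
		Phase:      "Running",
		Conditions: []condition{{Type: "Ready", Status: "True"}},
	}
	fmt.Println("metrics-server ready:", isReady(metrics)) // false, as in the repeated pod_ready.go:102 lines
	fmt.Println("coredns ready:", isReady(coredns))        // true
}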
	I0131 03:25:14.349642 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:14.349679 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:14.349688 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:14.349698 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:14.349705 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:14.349726 1465727 retry.go:31] will retry after 1.923619057s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:16.279057 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:16.279090 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:16.279098 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:16.279108 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:16.279114 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:16.279137 1465727 retry.go:31] will retry after 2.073030623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:18.359162 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:18.359189 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:18.359195 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:18.359204 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:18.359209 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:18.359228 1465727 retry.go:31] will retry after 3.260033275s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:15.855521 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:17.855614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:20.514278 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.713623849s)
	I0131 03:25:20.514394 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:20.527663 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:25:20.536562 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:25:20.545294 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:25:20.545336 1465496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:25:20.598639 1465496 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0131 03:25:20.598867 1465496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:25:20.744229 1465496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:25:20.744371 1465496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:25:20.744509 1465496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:25:20.966346 1465496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:25:20.968311 1465496 out.go:204]   - Generating certificates and keys ...
	I0131 03:25:20.968451 1465496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:25:20.968540 1465496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:25:20.968652 1465496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:25:20.968758 1465496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:25:20.968846 1465496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:25:20.969285 1465496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:25:20.969711 1465496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:25:20.970103 1465496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:25:20.970500 1465496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:25:20.970914 1465496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:25:20.971238 1465496 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:25:20.971319 1465496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:25:21.137192 1465496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:25:21.403913 1465496 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0131 03:25:21.508809 1465496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:25:21.721878 1465496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:25:22.136726 1465496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:25:22.137207 1465496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:25:22.139977 1465496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
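Process 1465496 is bootstrapping no-preload-625812 from scratch: after a kubeadm reset, it runs kubeadm init with a pre-rendered config and a long --ignore-preflight-errors list, reusing the certificates already on disk and writing fresh kubeconfigs and static Pod manifests. A minimal Go sketch of assembling that invocation is below; function and variable names are illustrative assumptions, while the paths and flags are taken from the command shown earlier in the log.

package main

import (
	"fmt"
	"strings"
)

// buildKubeadmInit assembles a kubeadm init command like the one in
// the log: binaries dir prepended to PATH, a pre-rendered config
// file, and preflight errors to ignore. Illustrative helper only.
func buildKubeadmInit(binDir, configPath string, ignore []string) string {
	return fmt.Sprintf(
		`sudo env PATH="%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
		binDir, configPath, strings.Join(ignore, ","),
	)
}

func main() {
	cmd := buildKubeadmInit(
		"/var/lib/minikube/binaries/v1.29.0-rc.2",
		"/var/tmp/minikube/kubeadm.yaml",
		[]string{"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap", "NumCPU", "Mem"},
	)
	fmt.Println(cmd)
}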
	I0131 03:25:19.979362 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.477779 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.624554 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:21.624586 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:21.624592 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:21.624602 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:21.624607 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:21.624626 1465727 retry.go:31] will retry after 3.519201574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:19.856226 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.856396 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:23.857487 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.141783 1465496 out.go:204]   - Booting up control plane ...
	I0131 03:25:22.141884 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:25:22.141972 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:25:22.143031 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:25:22.163448 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:25:22.163586 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:25:22.163682 1465496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:25:22.287643 1465496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:25:24.479871 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:26.977625 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:25.149248 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:25.149277 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:25.149282 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:25.149290 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:25.149295 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:25.149314 1465727 retry.go:31] will retry after 5.238557946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:25.857650 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:28.356862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.793355 1465496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506089 seconds
	I0131 03:25:30.811559 1465496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:25:30.830148 1465496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:25:31.367774 1465496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:25:31.368036 1465496 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-625812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:25:31.887121 1465496 kubeadm.go:322] [bootstrap-token] Using token: t3t0h9.3huj9bl3w24ti869
	I0131 03:25:31.888852 1465496 out.go:204]   - Configuring RBAC rules ...
	I0131 03:25:31.888974 1465496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:25:31.893841 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:25:31.902695 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:25:31.908132 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:25:31.912738 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:25:31.918089 1465496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:25:31.936690 1465496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:25:32.182433 1465496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:25:32.325953 1465496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:25:32.325981 1465496 kubeadm.go:322] 
	I0131 03:25:32.326114 1465496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:25:32.326143 1465496 kubeadm.go:322] 
	I0131 03:25:32.326244 1465496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:25:32.326272 1465496 kubeadm.go:322] 
	I0131 03:25:32.326332 1465496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:25:32.326416 1465496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:25:32.326500 1465496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:25:32.326511 1465496 kubeadm.go:322] 
	I0131 03:25:32.326588 1465496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:25:32.326598 1465496 kubeadm.go:322] 
	I0131 03:25:32.326664 1465496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:25:32.326674 1465496 kubeadm.go:322] 
	I0131 03:25:32.326743 1465496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:25:32.326853 1465496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:25:32.326947 1465496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:25:32.326958 1465496 kubeadm.go:322] 
	I0131 03:25:32.327052 1465496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:25:32.327151 1465496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:25:32.327160 1465496 kubeadm.go:322] 
	I0131 03:25:32.327264 1465496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327405 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:25:32.327437 1465496 kubeadm.go:322] 	--control-plane 
	I0131 03:25:32.327447 1465496 kubeadm.go:322] 
	I0131 03:25:32.327553 1465496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:25:32.327564 1465496 kubeadm.go:322] 
	I0131 03:25:32.327667 1465496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327800 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:25:32.328638 1465496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:25:32.328815 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:25:32.328835 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:25:32.330439 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:25:28.984930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:31.480349 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.393923 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:30.393959 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:30.393968 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:30.393979 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:30.393985 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:30.394010 1465727 retry.go:31] will retry after 6.045479872s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:30.357227 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.358411 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.332529 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:25:32.442284 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:25:32.487754 1465496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:25:32.487829 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.487926 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=no-preload-625812 minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.706857 1465496 ops.go:34] apiserver oom_adj: -16
	I0131 03:25:32.707010 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.207717 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.707229 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.207690 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.707786 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:35.207781 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
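The run of half-second "kubectl get sa default" invocations above (and continuing below) is minikube waiting for the default service account to exist before finishing RBAC setup; the later kubeadm.go:1088 line reports this as the elevateKubeSystemPrivileges wait. A minimal Go sketch of that poll loop follows, with the paths copied from the log and the helper name, timeout, and error handling being assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA runs "kubectl get sa default" every half second
// until it exits 0, meaning the default service account exists and
// RBAC bootstrap can proceed. Simplified sketch of the polling seen
// in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found after %v", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("elevateKubeSystemPrivileges wait:", err)
}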
	I0131 03:25:33.980255 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.481025 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.444898 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:36.444932 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:36.444938 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:36.444946 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:36.444951 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:36.444993 1465727 retry.go:31] will retry after 6.676077992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:34.855915 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:37.356945 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:35.707273 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.207173 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.707797 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.207697 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.707209 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.207989 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.707538 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.207693 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.707737 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:40.207439 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.980635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:41.479377 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:43.125885 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:43.125912 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:43.125917 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:43.125924 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:43.125928 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:43.125947 1465727 retry.go:31] will retry after 7.454064585s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:39.858377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:42.356966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:40.707639 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.207708 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.707131 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.207700 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.707292 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.207810 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.707392 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.207490 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.707258 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.883783 1465496 kubeadm.go:1088] duration metric: took 12.396028951s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:44.883823 1465496 kubeadm.go:406] StartCluster complete in 5m11.777629477s
	I0131 03:25:44.883850 1465496 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.883949 1465496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:44.886319 1465496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.886620 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:44.886727 1465496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:44.886814 1465496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-625812"
	I0131 03:25:44.886837 1465496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-625812"
	W0131 03:25:44.886849 1465496 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:44.886903 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.886934 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:25:44.886991 1465496 addons.go:69] Setting default-storageclass=true in profile "no-preload-625812"
	I0131 03:25:44.887007 1465496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-625812"
	I0131 03:25:44.887134 1465496 addons.go:69] Setting metrics-server=true in profile "no-preload-625812"
	I0131 03:25:44.887155 1465496 addons.go:234] Setting addon metrics-server=true in "no-preload-625812"
	W0131 03:25:44.887164 1465496 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:44.887216 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.887313 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887349 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887407 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887439 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887611 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887655 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.908876 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0131 03:25:44.908881 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0131 03:25:44.908879 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0131 03:25:44.909406 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909433 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909512 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909925 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.909950 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910054 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910098 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910123 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910148 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910434 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910530 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910543 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910740 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.911086 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911140 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.911185 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911230 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.914635 1465496 addons.go:234] Setting addon default-storageclass=true in "no-preload-625812"
	W0131 03:25:44.914667 1465496 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:44.914698 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.915089 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.915135 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.931265 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0131 03:25:44.931296 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0131 03:25:44.931816 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.931859 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.932148 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932599 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932677 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932938 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933062 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.933655 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.933681 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.933726 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933947 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934129 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.934262 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934954 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.935001 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.936333 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.938601 1465496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:44.940239 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:44.940256 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:44.940273 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.938638 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.942306 1465496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:44.944873 1465496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:44.944894 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:44.944914 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.943649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944987 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.945023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944263 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.945795 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.946072 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.946309 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.949097 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949522 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.949544 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949710 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.949892 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.950040 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.950179 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.959691 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0131 03:25:44.960146 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.960696 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.960723 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.961045 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.961279 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.963057 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.963321 1465496 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:44.963342 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:44.963363 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.966336 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.966808 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.966845 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.967006 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.967205 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.967329 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.967472 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:45.114858 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:45.135760 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:45.209439 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:45.209466 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:45.219146 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:45.287400 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:45.287430 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:45.380888 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:45.380917 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:45.462341 1465496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-625812" context rescaled to 1 replicas
	I0131 03:25:45.462403 1465496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:45.463834 1465496 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:45.465542 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:45.515980 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
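For each enabled addon the log shows the same two-step flow: manifests are streamed to the guest over SSH (the "scp memory --> /etc/kubernetes/addons/..." lines), then applied in one kubectl apply with the in-guest kubeconfig, as in the metrics-server command just above. The Go sketch below only reconstructs that apply command line from the values in the log; the function name is an assumption, and minikube's real code runs the command through its ssh_runner rather than printing it.

package main

import (
	"fmt"
	"strings"
)

// applyAddonManifests builds a kubectl apply invocation over all of an
// addon's manifest files, matching the form of the metrics-server
// apply in the log. Illustrative helper only.
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) string {
	return fmt.Sprintf("sudo KUBECONFIG=%s %s apply -f %s",
		kubeconfig, kubectl, strings.Join(manifests, " -f "))
}

func main() {
	cmd := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	fmt.Println(cmd)
}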
	I0131 03:25:46.322228 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.20732453s)
	I0131 03:25:46.322281 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.186472094s)
	I0131 03:25:46.322327 1465496 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
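For reference, the "host record injected into CoreDNS's ConfigMap" step above is done by the sed pipeline shown a few lines earlier; the same edit can be expressed with client-go. The sketch below is only an illustration, not minikube's actual code: the kubeconfig path and the host IP are taken from the log, and it assumes the Corefile still contains the standard "forward . /etc/resolv.conf" plugin line.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// injectHostRecord inserts a hosts{} block for host.minikube.internal into the
// CoreDNS Corefile, immediately before the forward plugin, then updates the ConfigMap.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	corefile := cm.Data["Corefile"]
	if strings.Contains(corefile, "host.minikube.internal") {
		return nil // already injected
	}
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	idx := strings.Index(corefile, "        forward .")
	if idx < 0 {
		return fmt.Errorf("forward plugin not found in Corefile")
	}
	cm.Data["Corefile"] = corefile[:idx] + hostsBlock + corefile[idx:]
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := injectHostRecord(context.Background(), cs, "192.168.72.1"); err != nil {
		panic(err)
	}
}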
	I0131 03:25:46.322296 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322366 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322413 1465496 node_ready.go:35] waiting up to 6m0s for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.322369 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.103177926s)
	I0131 03:25:46.322663 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322676 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322757 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.322760 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.322773 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.322783 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322791 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323137 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323156 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323167 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.323176 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323177 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323257 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323281 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323295 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323733 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323755 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.329699 1465496 node_ready.go:49] node "no-preload-625812" has status "Ready":"True"
	I0131 03:25:46.329719 1465496 node_ready.go:38] duration metric: took 7.243031ms waiting for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.329728 1465496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:46.345672 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.345703 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.345984 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.346000 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.348953 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:46.699387 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183353653s)
	I0131 03:25:46.699456 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699474 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.699910 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.699932 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.699945 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699957 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.700251 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.700272 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.700285 1465496 addons.go:470] Verifying addon metrics-server=true in "no-preload-625812"
	I0131 03:25:46.702053 1465496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:43.980700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.478141 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:44.855513 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.857198 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:49.356657 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.703328 1465496 addons.go:505] enable addons completed in 1.816619953s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:46.865293 1465496 pod_ready.go:97] error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865325 1465496 pod_ready.go:81] duration metric: took 516.342792ms waiting for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:46.865336 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865343 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872316 1465496 pod_ready.go:92] pod "coredns-76f75df574-hvxjf" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.872345 1465496 pod_ready.go:81] duration metric: took 1.006996095s waiting for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872355 1465496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878192 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.878215 1465496 pod_ready.go:81] duration metric: took 5.854656ms waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878223 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883120 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.883139 1465496 pod_ready.go:81] duration metric: took 4.910099ms waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883147 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889909 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.889934 1465496 pod_ready.go:81] duration metric: took 6.780796ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889944 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926206 1465496 pod_ready.go:92] pod "kube-proxy-pkvj6" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:48.926230 1465496 pod_ready.go:81] duration metric: took 1.036280111s waiting for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926239 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325588 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:49.325613 1465496 pod_ready.go:81] duration metric: took 399.368272ms waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325623 1465496 pod_ready.go:38] duration metric: took 2.995885901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
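The pod_ready.go waits above (and the long runs of "has status Ready: False" polls from the other clusters in this section) boil down to repeatedly reading a pod and checking its Ready condition until a timeout. A minimal client-go sketch of that loop follows; the kubeconfig path, pod name, and 2-second poll interval are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-no-preload-625812", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}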
	I0131 03:25:49.325640 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:49.325693 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:49.339591 1465496 api_server.go:72] duration metric: took 3.877145066s to wait for apiserver process to appear ...
	I0131 03:25:49.339624 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:49.339652 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:25:49.345130 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:25:49.346350 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:25:49.346371 1465496 api_server.go:131] duration metric: took 6.739501ms to wait for apiserver health ...
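The healthz probe logged at api_server.go:253 above is a plain HTTPS GET against the apiserver's /healthz endpoint, expecting the literal body "ok". The sketch below shows that check; it skips TLS verification only to stay short (the real check would trust the cluster CA), and it assumes /healthz is reachable anonymously, which is the Kubernetes default via the system:public-info-viewer role.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz returns nil only when the endpoint answers 200 with body "ok".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.72.23:8443/healthz"); err != nil {
		panic(err)
	}
	fmt.Println("apiserver is healthy")
}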
	I0131 03:25:49.346379 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:49.529845 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:25:49.529876 1465496 system_pods.go:61] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.529881 1465496 system_pods.go:61] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.529885 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.529890 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.529894 1465496 system_pods.go:61] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.529898 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.529905 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.529909 1465496 system_pods.go:61] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.529918 1465496 system_pods.go:74] duration metric: took 183.532223ms to wait for pod list to return data ...
	I0131 03:25:49.529926 1465496 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:49.726239 1465496 default_sa.go:45] found service account: "default"
	I0131 03:25:49.726266 1465496 default_sa.go:55] duration metric: took 196.333831ms for default service account to be created ...
	I0131 03:25:49.726276 1465496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:49.933151 1465496 system_pods.go:86] 8 kube-system pods found
	I0131 03:25:49.933188 1465496 system_pods.go:89] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.933198 1465496 system_pods.go:89] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.933205 1465496 system_pods.go:89] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.933212 1465496 system_pods.go:89] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.933220 1465496 system_pods.go:89] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.933228 1465496 system_pods.go:89] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.933243 1465496 system_pods.go:89] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.933254 1465496 system_pods.go:89] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.933268 1465496 system_pods.go:126] duration metric: took 206.984671ms to wait for k8s-apps to be running ...
	I0131 03:25:49.933282 1465496 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:25:49.933345 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:49.949256 1465496 system_svc.go:56] duration metric: took 15.963316ms WaitForService to wait for kubelet.
	I0131 03:25:49.949290 1465496 kubeadm.go:581] duration metric: took 4.486852525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:25:49.949316 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:25:50.126992 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:25:50.127032 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:25:50.127044 1465496 node_conditions.go:105] duration metric: took 177.723252ms to run NodePressure ...
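The NodePressure verification above reports node capacity (ephemeral storage, CPU) and confirms no pressure conditions are set. A small client-go sketch of that read, with the kubeconfig path as an assumed value:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		// Flag any pressure condition that is currently True.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  WARNING: %s is True\n", c.Type)
				}
			}
		}
	}
}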
	I0131 03:25:50.127056 1465496 start.go:228] waiting for startup goroutines ...
	I0131 03:25:50.127063 1465496 start.go:233] waiting for cluster config update ...
	I0131 03:25:50.127072 1465496 start.go:242] writing updated cluster config ...
	I0131 03:25:50.127343 1465496 ssh_runner.go:195] Run: rm -f paused
	I0131 03:25:50.184224 1465496 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0131 03:25:50.186267 1465496 out.go:177] * Done! kubectl is now configured to use "no-preload-625812" cluster and "default" namespace by default
	I0131 03:25:48.481166 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.977129 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:52.977622 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.586089 1465727 system_pods.go:86] 6 kube-system pods found
	I0131 03:25:50.586129 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:50.586138 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Pending
	I0131 03:25:50.586144 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:50.586151 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Pending
	I0131 03:25:50.586172 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:50.586182 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:50.586211 1465727 retry.go:31] will retry after 13.55623924s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:51.856116 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:53.856661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:55.480823 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:57.978681 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:56.355895 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:58.356767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:59.981147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.479364 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:00.856081 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.977218 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:06.978863 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.148474 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:04.148505 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:04.148511 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Pending
	I0131 03:26:04.148516 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:04.148520 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:04.148524 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:04.148528 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:04.148533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:04.148537 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:04.148555 1465727 retry.go:31] will retry after 14.271857783s: missing components: etcd
	I0131 03:26:05.355042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:07.358366 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:08.981159 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:10.982761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:09.856454 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:12.357096 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:13.478470 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:15.977827 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.426593 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:18.426625 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:18.426634 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Running
	I0131 03:26:18.426641 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:18.426647 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:18.426652 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:18.426657 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:18.426667 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:18.426676 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:18.426690 1465727 system_pods.go:126] duration metric: took 1m9.974130417s to wait for k8s-apps to be running ...
	I0131 03:26:18.426704 1465727 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:26:18.426762 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:26:18.443853 1465727 system_svc.go:56] duration metric: took 17.14056ms WaitForService to wait for kubelet.
	I0131 03:26:18.443902 1465727 kubeadm.go:581] duration metric: took 1m16.810021481s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:26:18.443930 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:26:18.447269 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:26:18.447298 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:26:18.447311 1465727 node_conditions.go:105] duration metric: took 3.375419ms to run NodePressure ...
	I0131 03:26:18.447325 1465727 start.go:228] waiting for startup goroutines ...
	I0131 03:26:18.447333 1465727 start.go:233] waiting for cluster config update ...
	I0131 03:26:18.447348 1465727 start.go:242] writing updated cluster config ...
	I0131 03:26:18.447643 1465727 ssh_runner.go:195] Run: rm -f paused
	I0131 03:26:18.500327 1465727 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0131 03:26:18.502092 1465727 out.go:177] 
	W0131 03:26:18.503693 1465727 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0131 03:26:18.505132 1465727 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0131 03:26:18.506889 1465727 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-711547" cluster and "default" namespace by default
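The "minor skew: 13" warning above comes from comparing the kubectl client's minor version with the cluster's. A rough sketch of that comparison, using the exact versions from this run; the parsing helper is illustrative, not minikube's own:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component from a version like "1.29.1" or "v1.16.0".
func minorOf(v string) (int, error) {
	v = strings.TrimPrefix(v, "v")
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	client, cluster := "1.29.1", "1.16.0" // values taken from the log above
	cm, _ := minorOf(client)
	sm, _ := minorOf(cluster)
	skew := cm - sm
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	if skew > 1 {
		fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s.\n", client, cluster)
	}
}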
	I0131 03:26:14.856448 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:17.357112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.478401 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:20.977208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.978473 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:19.857118 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.358299 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:25.478227 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:27.978500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:24.855341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:26.855774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:28.856168 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:30.477275 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:32.478896 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:31.357512 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:33.363164 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:34.978058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:37.481411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:35.856084 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:38.358589 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:39.976914 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:41.979388 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:40.856122 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:42.856950 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:44.477345 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:46.478466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:45.356312 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:47.855178 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:48.978543 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.477641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:49.856079 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.856377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:54.358161 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:53.477989 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:55.977887 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:56.855581 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.856493 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.477589 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:00.478116 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:02.978262 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:01.354961 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:03.355994 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.478139 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.977913 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.356248 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.855596 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:10.479147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:12.977533 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:09.856222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:11.857068 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.356693 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.978967 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:17.477119 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:16.854825 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:18.855620 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:19.477877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:21.482081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:20.856333 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.355603 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.978877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:26.477700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:25.356085 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:27.356888 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:28.478497 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:30.977469 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:32.977663 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:29.854905 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:31.855752 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:33.855976 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.480505 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.977880 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.857042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.862112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:39.977961 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.478948 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:40.355787 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.358217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.977950 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.478570 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.855551 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.355853 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.977939 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:51.978267 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.855671 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:52.357889 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:53.979331 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:56.477411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:54.856642 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:57.357372 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:58.478175 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:00.977929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.978272 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:59.856232 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.356390 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:05.477602 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:07.478168 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:04.855423 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:06.859565 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.355517 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.977639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.977754 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.855199 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:13.856260 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:14.477406 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:16.478372 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:15.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:17.861124 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:18.980067 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:21.478833 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:20.356883 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:22.358007 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:23.979040 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.478463 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:24.855207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.855709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.866306 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.978973 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.477340 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.355706 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.855699 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.477521 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:35.478390 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:37.977270 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:36.358244 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:38.855704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:39.979930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.477381 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:40.856442 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.857041 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:44.477500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:46.478446 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:45.356039 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:47.855042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:48.977241 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:50.977925 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:52.978323 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:49.857897 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:51.857941 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:54.357042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.477690 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:57.477927 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.855298 1465898 pod_ready.go:81] duration metric: took 4m0.007008152s waiting for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	E0131 03:28:55.855323 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:28:55.855330 1465898 pod_ready.go:38] duration metric: took 4m2.377385486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:28:55.855346 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:28:55.855399 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:55.855533 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:55.913399 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:55.913425 1465898 cri.go:89] found id: ""
	I0131 03:28:55.913445 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:55.913515 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.918308 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:55.918379 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:55.964846 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:55.964872 1465898 cri.go:89] found id: ""
	I0131 03:28:55.964881 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:55.964942 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.969090 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:55.969158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:56.012247 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:56.012271 1465898 cri.go:89] found id: ""
	I0131 03:28:56.012279 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:56.012337 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.016457 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:56.016535 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:56.053842 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.053867 1465898 cri.go:89] found id: ""
	I0131 03:28:56.053877 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:56.053926 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.057807 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:56.057889 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:28:56.097431 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.097465 1465898 cri.go:89] found id: ""
	I0131 03:28:56.097477 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:28:56.097549 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.101354 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:28:56.101420 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:28:56.136696 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.136725 1465898 cri.go:89] found id: ""
	I0131 03:28:56.136735 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:28:56.136800 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.140584 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:28:56.140661 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:28:56.177606 1465898 cri.go:89] found id: ""
	I0131 03:28:56.177639 1465898 logs.go:284] 0 containers: []
	W0131 03:28:56.177650 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:28:56.177658 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:28:56.177779 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:28:56.215795 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.215824 1465898 cri.go:89] found id: ""
	I0131 03:28:56.215835 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:28:56.215909 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.220297 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:28:56.220324 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:28:56.319500 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:28:56.319544 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.355731 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:28:56.355767 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.410301 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:28:56.410341 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:28:56.858474 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:28:56.858531 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:28:56.903299 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:28:56.903337 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.961020 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:28:56.961070 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.998347 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:28:56.998382 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:28:57.011562 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:28:57.011594 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:28:57.152899 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:28:57.152937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:57.201041 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:28:57.201084 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:57.247253 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:28:57.247289 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.478758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:01.977644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:59.786669 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:28:59.804046 1465898 api_server.go:72] duration metric: took 4m8.808083047s to wait for apiserver process to appear ...
	I0131 03:28:59.804079 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:28:59.804131 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:59.804249 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:59.846418 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:59.846440 1465898 cri.go:89] found id: ""
	I0131 03:28:59.846448 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:59.846516 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.850526 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:59.850588 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:59.892343 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:59.892373 1465898 cri.go:89] found id: ""
	I0131 03:28:59.892382 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:59.892449 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.896483 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:59.896561 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:59.933901 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.933934 1465898 cri.go:89] found id: ""
	I0131 03:28:59.933945 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:59.934012 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.938150 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:59.938232 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:59.980328 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:59.980354 1465898 cri.go:89] found id: ""
	I0131 03:28:59.980363 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:59.980418 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.984866 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:59.984943 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:00.029663 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.029695 1465898 cri.go:89] found id: ""
	I0131 03:29:00.029705 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:00.029753 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.034759 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:00.034827 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:00.084320 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.084347 1465898 cri.go:89] found id: ""
	I0131 03:29:00.084355 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:00.084431 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.088744 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:00.088819 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:00.133028 1465898 cri.go:89] found id: ""
	I0131 03:29:00.133062 1465898 logs.go:284] 0 containers: []
	W0131 03:29:00.133072 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:00.133080 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:00.133145 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:00.175187 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.175219 1465898 cri.go:89] found id: ""
	I0131 03:29:00.175229 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:00.175306 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.179387 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:00.179420 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.233630 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:00.233676 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.271692 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:00.271735 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:00.655131 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:00.655177 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:00.757571 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:00.757628 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:00.805958 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:00.806000 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:00.842604 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:00.842650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:00.888064 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:00.888103 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.939276 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:00.939331 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:00.981965 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:00.982006 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:00.996237 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:00.996265 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:01.129715 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:01.129754 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.677131 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:29:03.684945 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:29:03.687117 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:03.687142 1465898 api_server.go:131] duration metric: took 3.883056117s to wait for apiserver health ...
	I0131 03:29:03.687171 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:03.687245 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:03.687303 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:03.727289 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:03.727314 1465898 cri.go:89] found id: ""
	I0131 03:29:03.727322 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:29:03.727375 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.731095 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:03.731158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:03.779103 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.779134 1465898 cri.go:89] found id: ""
	I0131 03:29:03.779144 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:29:03.779223 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.783387 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:03.783459 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:03.821342 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:03.821368 1465898 cri.go:89] found id: ""
	I0131 03:29:03.821376 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:29:03.821438 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.825907 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:03.825990 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:03.863826 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:03.863853 1465898 cri.go:89] found id: ""
	I0131 03:29:03.863867 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:29:03.863919 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.868093 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:03.868163 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:03.908653 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:03.908681 1465898 cri.go:89] found id: ""
	I0131 03:29:03.908690 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:03.908750 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.912998 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:03.913078 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:03.961104 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:03.961131 1465898 cri.go:89] found id: ""
	I0131 03:29:03.961139 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:03.961212 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.965913 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:03.965996 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:04.003791 1465898 cri.go:89] found id: ""
	I0131 03:29:04.003824 1465898 logs.go:284] 0 containers: []
	W0131 03:29:04.003833 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:04.003840 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:04.003907 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:04.040736 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.040773 1465898 cri.go:89] found id: ""
	I0131 03:29:04.040785 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:04.040852 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:04.045013 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:04.045042 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:04.091615 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:04.091650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:04.204602 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:04.204638 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:04.257510 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:04.257548 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:04.296585 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:04.296619 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:04.360438 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:04.360480 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.398825 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:04.398858 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:04.711357 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:04.711403 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:04.804895 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:04.804940 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:04.819394 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:04.819426 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:04.869897 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:04.869937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:04.918002 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:04.918040 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:07.471428 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:07.471466 1465898 system_pods.go:61] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.471474 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.471481 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.471488 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.471495 1465898 system_pods.go:61] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.471501 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.471516 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.471524 1465898 system_pods.go:61] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.471535 1465898 system_pods.go:74] duration metric: took 3.784356035s to wait for pod list to return data ...
	I0131 03:29:07.471552 1465898 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:07.474519 1465898 default_sa.go:45] found service account: "default"
	I0131 03:29:07.474547 1465898 default_sa.go:55] duration metric: took 2.986529ms for default service account to be created ...
	I0131 03:29:07.474559 1465898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:07.480778 1465898 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:07.480805 1465898 system_pods.go:89] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.480810 1465898 system_pods.go:89] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.480816 1465898 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.480823 1465898 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.480827 1465898 system_pods.go:89] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.480831 1465898 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.480837 1465898 system_pods.go:89] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.480842 1465898 system_pods.go:89] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.480850 1465898 system_pods.go:126] duration metric: took 6.285456ms to wait for k8s-apps to be running ...
	I0131 03:29:07.480856 1465898 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:07.480905 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:07.497612 1465898 system_svc.go:56] duration metric: took 16.74594ms WaitForService to wait for kubelet.
	I0131 03:29:07.497643 1465898 kubeadm.go:581] duration metric: took 4m16.501686281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:07.497678 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:07.501680 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:07.501732 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:07.501748 1465898 node_conditions.go:105] duration metric: took 4.063716ms to run NodePressure ...
	I0131 03:29:07.501763 1465898 start.go:228] waiting for startup goroutines ...
	I0131 03:29:07.501772 1465898 start.go:233] waiting for cluster config update ...
	I0131 03:29:07.501818 1465898 start.go:242] writing updated cluster config ...
	I0131 03:29:07.502234 1465898 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:07.559193 1465898 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:07.561350 1465898 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-873005" cluster and "default" namespace by default
	I0131 03:29:03.978465 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:06.477545 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:08.480466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:10.978639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:13.478152 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978967 1466459 pod_ready.go:81] duration metric: took 4m0.008624682s waiting for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	E0131 03:29:15.978976 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:29:15.978984 1466459 pod_ready.go:38] duration metric: took 4m1.99139457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:29:15.978999 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:29:15.979026 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:15.979074 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:16.041735 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:16.041774 1466459 cri.go:89] found id: ""
	I0131 03:29:16.041784 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:16.041845 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.046910 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:16.046982 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:16.085124 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.085156 1466459 cri.go:89] found id: ""
	I0131 03:29:16.085166 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:16.085226 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.089189 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:16.089274 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:16.129255 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.129286 1466459 cri.go:89] found id: ""
	I0131 03:29:16.129296 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:16.129352 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.133364 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:16.133451 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:16.170605 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.170634 1466459 cri.go:89] found id: ""
	I0131 03:29:16.170643 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:16.170704 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.175117 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:16.175197 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:16.210139 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:16.210169 1466459 cri.go:89] found id: ""
	I0131 03:29:16.210179 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:16.210248 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.214877 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:16.214960 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:16.257772 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.257797 1466459 cri.go:89] found id: ""
	I0131 03:29:16.257807 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:16.257878 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.262276 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:16.262341 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:16.304203 1466459 cri.go:89] found id: ""
	I0131 03:29:16.304233 1466459 logs.go:284] 0 containers: []
	W0131 03:29:16.304241 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:16.304248 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:16.304325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:16.343337 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:16.343360 1466459 cri.go:89] found id: ""
	I0131 03:29:16.343368 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:16.343423 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.347098 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:16.347129 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.389501 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:16.389544 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.426153 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:16.426196 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.476241 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:16.476281 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.533086 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:16.533131 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:16.575664 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:16.575701 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:16.675622 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:16.675669 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:16.690251 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:16.690285 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:16.828714 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:16.828748 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:17.253277 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:17.253335 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:17.304285 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:17.304323 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:17.340432 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:17.340465 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:19.889056 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:29:19.904225 1466459 api_server.go:72] duration metric: took 4m8.286630357s to wait for apiserver process to appear ...
	I0131 03:29:19.904258 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:29:19.904302 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:19.904375 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:19.939116 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:19.939147 1466459 cri.go:89] found id: ""
	I0131 03:29:19.939159 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:19.939225 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.943273 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:19.943351 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:19.979411 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:19.979436 1466459 cri.go:89] found id: ""
	I0131 03:29:19.979445 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:19.979512 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.984054 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:19.984148 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:20.022949 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.022978 1466459 cri.go:89] found id: ""
	I0131 03:29:20.022988 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:20.023046 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.027252 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:20.027325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:20.064215 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.064238 1466459 cri.go:89] found id: ""
	I0131 03:29:20.064246 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:20.064303 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.068589 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:20.068687 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:20.106750 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.106781 1466459 cri.go:89] found id: ""
	I0131 03:29:20.106792 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:20.106854 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.111267 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:20.111342 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:20.147750 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.147789 1466459 cri.go:89] found id: ""
	I0131 03:29:20.147801 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:20.147873 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.152882 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:20.152950 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:20.191082 1466459 cri.go:89] found id: ""
	I0131 03:29:20.191121 1466459 logs.go:284] 0 containers: []
	W0131 03:29:20.191133 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:20.191143 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:20.191226 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:20.226346 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.226373 1466459 cri.go:89] found id: ""
	I0131 03:29:20.226382 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:20.226436 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.230561 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:20.230607 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:20.596919 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:20.596968 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:20.691142 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:20.691184 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:20.750659 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:20.750692 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.816839 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:20.816882 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.852691 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:20.852730 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.909788 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:20.909828 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.950311 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:20.950360 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.985515 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:20.985554 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:21.030306 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:21.030350 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:21.043130 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:21.043172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:21.160716 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:21.160763 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.706550 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:29:23.711528 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:29:23.713998 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:23.714027 1466459 api_server.go:131] duration metric: took 3.809760557s to wait for apiserver health ...
	I0131 03:29:23.714039 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:23.714070 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:23.714142 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:23.754990 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:23.755017 1466459 cri.go:89] found id: ""
	I0131 03:29:23.755028 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:23.755091 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.759151 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:23.759224 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:23.798410 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.798448 1466459 cri.go:89] found id: ""
	I0131 03:29:23.798459 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:23.798541 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.802512 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:23.802588 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:23.840962 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:23.840991 1466459 cri.go:89] found id: ""
	I0131 03:29:23.841001 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:23.841073 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.844943 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:23.845021 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:23.882314 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:23.882355 1466459 cri.go:89] found id: ""
	I0131 03:29:23.882368 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:23.882438 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.886227 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:23.886292 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:23.925001 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:23.925031 1466459 cri.go:89] found id: ""
	I0131 03:29:23.925042 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:23.925100 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.929531 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:23.929601 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:23.969068 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:23.969098 1466459 cri.go:89] found id: ""
	I0131 03:29:23.969108 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:23.969167 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.973154 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:23.973216 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:24.010928 1466459 cri.go:89] found id: ""
	I0131 03:29:24.010956 1466459 logs.go:284] 0 containers: []
	W0131 03:29:24.010963 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:24.010970 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:24.011026 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:24.052588 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.052614 1466459 cri.go:89] found id: ""
	I0131 03:29:24.052622 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:24.052678 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:24.056735 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:24.056762 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:24.105290 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:24.105324 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:24.152634 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:24.152678 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:24.198981 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:24.199021 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:24.247140 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:24.247172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:24.287472 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:24.287502 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:24.344060 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:24.344101 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.384811 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:24.384846 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:24.707577 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:24.707628 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:24.756450 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:24.756490 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:24.844886 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:24.844935 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:24.859102 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:24.859132 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:27.482952 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:27.482992 1466459 system_pods.go:61] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.483000 1466459 system_pods.go:61] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.483007 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.483027 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.483038 1466459 system_pods.go:61] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.483049 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.483056 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.483066 1466459 system_pods.go:61] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.483076 1466459 system_pods.go:74] duration metric: took 3.76903179s to wait for pod list to return data ...
	I0131 03:29:27.483087 1466459 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:27.486092 1466459 default_sa.go:45] found service account: "default"
	I0131 03:29:27.486121 1466459 default_sa.go:55] duration metric: took 3.025473ms for default service account to be created ...
	I0131 03:29:27.486131 1466459 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:27.491964 1466459 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:27.491989 1466459 system_pods.go:89] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.491997 1466459 system_pods.go:89] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.492004 1466459 system_pods.go:89] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.492010 1466459 system_pods.go:89] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.492015 1466459 system_pods.go:89] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.492022 1466459 system_pods.go:89] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.492032 1466459 system_pods.go:89] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.492044 1466459 system_pods.go:89] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.492059 1466459 system_pods.go:126] duration metric: took 5.920402ms to wait for k8s-apps to be running ...
	I0131 03:29:27.492076 1466459 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:27.492131 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:27.507857 1466459 system_svc.go:56] duration metric: took 15.770556ms WaitForService to wait for kubelet.
	I0131 03:29:27.507891 1466459 kubeadm.go:581] duration metric: took 4m15.890307101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:27.507918 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:27.510942 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:27.510968 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:27.510980 1466459 node_conditions.go:105] duration metric: took 3.056564ms to run NodePressure ...
	I0131 03:29:27.510992 1466459 start.go:228] waiting for startup goroutines ...
	I0131 03:29:27.510998 1466459 start.go:233] waiting for cluster config update ...
	I0131 03:29:27.511008 1466459 start.go:242] writing updated cluster config ...
	I0131 03:29:27.511334 1466459 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:27.564506 1466459 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:27.566730 1466459 out.go:177] * Done! kubectl is now configured to use "embed-certs-958254" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:19:25 UTC, ends at Wed 2024-01-31 03:38:09 UTC. --
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.344045462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672289344026602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=45bcbfde-1c61-4bdb-9fe3-7a5384953f07 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.344485003Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bb1e7c39-b637-4ceb-a86a-35d0496221d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.344554161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bb1e7c39-b637-4ceb-a86a-35d0496221d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.344811646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4,PodSandboxId:81633eda4b4b6493505e4cf9f1533aa0c4089bdade69ac526ce9eac8e35bbfc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671494702297973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db68da18-b403-43a6-abdd-f3354e633a5c,},Annotations:map[string]string{io.kubernetes.container.hash: a6a96ae5,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5,PodSandboxId:0f674b1d9d1bde9f2ae0a752db7e07644cfaaa5d60f27ee7ed24251831543611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671494201005964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blwwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190c406e-eb21-4420-bcec-ad218ec4b760,},Annotations:map[string]string{io.kubernetes.container.hash: a97f7de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9,PodSandboxId:19645c821c2221e773501353e9ba91f3829dd284c4017500c7b6bc3af164b66f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671493246449131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gdks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35e6baf-1ad9-4df7-bbb1-a2443f8c658f,},Annotations:map[string]string{io.kubernetes.container.hash: e68a6069,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e,PodSandboxId:0115656e4008f9184d3ef2b731d827d8842a25fb27bb4f1aee6a9360930e4a6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671469943028659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6b040e403e1f7b8f444afeddf58495ac,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd,PodSandboxId:9ed56be620ecc98e86933195a507ca808be1ef9d7d7f76a7080fac467caaa78f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671469350263892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2905c6218f261e3cf3463b3e9b70ca0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42630948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4,PodSandboxId:b86c8f503e84e886c1d6e6ceaaa9e3deb5a207f3d59c52594af655d7cad10dbd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671469254334938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-873005,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf9a6b1d9beb04cd73df00e42e9d441,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9,PodSandboxId:3818469509b8c50cf0b0dd0172dff260b20f3cf1435288f103662bd4c209a567,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671469043703900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a27c21cb939d09b9a4d98297cb64863b,},Annotations:map[string]string{io.kubernetes.container.hash: 681801f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bb1e7c39-b637-4ceb-a86a-35d0496221d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.383182449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b9675cb9-aa4b-43b6-82a4-14407645d4d0 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.383258766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b9675cb9-aa4b-43b6-82a4-14407645d4d0 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.384176525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=399b2103-8ef5-4857-a0f2-65d1d9af6fe8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.384543555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672289384529179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=399b2103-8ef5-4857-a0f2-65d1d9af6fe8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.385142203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ad3559ff-e8f7-48e7-b72e-d3e06714eb2f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.385208293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ad3559ff-e8f7-48e7-b72e-d3e06714eb2f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.385494232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4,PodSandboxId:81633eda4b4b6493505e4cf9f1533aa0c4089bdade69ac526ce9eac8e35bbfc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671494702297973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db68da18-b403-43a6-abdd-f3354e633a5c,},Annotations:map[string]string{io.kubernetes.container.hash: a6a96ae5,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5,PodSandboxId:0f674b1d9d1bde9f2ae0a752db7e07644cfaaa5d60f27ee7ed24251831543611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671494201005964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blwwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190c406e-eb21-4420-bcec-ad218ec4b760,},Annotations:map[string]string{io.kubernetes.container.hash: a97f7de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9,PodSandboxId:19645c821c2221e773501353e9ba91f3829dd284c4017500c7b6bc3af164b66f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671493246449131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gdks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35e6baf-1ad9-4df7-bbb1-a2443f8c658f,},Annotations:map[string]string{io.kubernetes.container.hash: e68a6069,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e,PodSandboxId:0115656e4008f9184d3ef2b731d827d8842a25fb27bb4f1aee6a9360930e4a6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671469943028659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6b040e403e1f7b8f444afeddf58495ac,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd,PodSandboxId:9ed56be620ecc98e86933195a507ca808be1ef9d7d7f76a7080fac467caaa78f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671469350263892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2905c6218f261e3cf3463b3e9b70ca0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42630948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4,PodSandboxId:b86c8f503e84e886c1d6e6ceaaa9e3deb5a207f3d59c52594af655d7cad10dbd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671469254334938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-873005,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf9a6b1d9beb04cd73df00e42e9d441,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9,PodSandboxId:3818469509b8c50cf0b0dd0172dff260b20f3cf1435288f103662bd4c209a567,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671469043703900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a27c21cb939d09b9a4d98297cb64863b,},Annotations:map[string]string{io.kubernetes.container.hash: 681801f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ad3559ff-e8f7-48e7-b72e-d3e06714eb2f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.421906628Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ec8bf3ca-8fb0-460d-811b-11f89ed80c53 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.421991638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ec8bf3ca-8fb0-460d-811b-11f89ed80c53 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.423418640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=02dc6add-6ef7-4300-a137-de761c9188df name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.423818982Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672289423805568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=02dc6add-6ef7-4300-a137-de761c9188df name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.424512387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=64aac3d4-5b0e-4db1-9f19-c37ec5ce3f2f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.424583712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=64aac3d4-5b0e-4db1-9f19-c37ec5ce3f2f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.424741534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4,PodSandboxId:81633eda4b4b6493505e4cf9f1533aa0c4089bdade69ac526ce9eac8e35bbfc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671494702297973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db68da18-b403-43a6-abdd-f3354e633a5c,},Annotations:map[string]string{io.kubernetes.container.hash: a6a96ae5,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5,PodSandboxId:0f674b1d9d1bde9f2ae0a752db7e07644cfaaa5d60f27ee7ed24251831543611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671494201005964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blwwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190c406e-eb21-4420-bcec-ad218ec4b760,},Annotations:map[string]string{io.kubernetes.container.hash: a97f7de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9,PodSandboxId:19645c821c2221e773501353e9ba91f3829dd284c4017500c7b6bc3af164b66f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671493246449131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gdks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35e6baf-1ad9-4df7-bbb1-a2443f8c658f,},Annotations:map[string]string{io.kubernetes.container.hash: e68a6069,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e,PodSandboxId:0115656e4008f9184d3ef2b731d827d8842a25fb27bb4f1aee6a9360930e4a6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671469943028659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6b040e403e1f7b8f444afeddf58495ac,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd,PodSandboxId:9ed56be620ecc98e86933195a507ca808be1ef9d7d7f76a7080fac467caaa78f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671469350263892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2905c6218f261e3cf3463b3e9b70ca0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42630948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4,PodSandboxId:b86c8f503e84e886c1d6e6ceaaa9e3deb5a207f3d59c52594af655d7cad10dbd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671469254334938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-873005,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf9a6b1d9beb04cd73df00e42e9d441,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9,PodSandboxId:3818469509b8c50cf0b0dd0172dff260b20f3cf1435288f103662bd4c209a567,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671469043703900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a27c21cb939d09b9a4d98297cb64863b,},Annotations:map[string]string{io.kubernetes.container.hash: 681801f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=64aac3d4-5b0e-4db1-9f19-c37ec5ce3f2f name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.456729594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d6795fce-9f70-4593-9a9c-ce48dc7af08c name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.456805082Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d6795fce-9f70-4593-9a9c-ce48dc7af08c name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.458107623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ca9c01d6-3d47-4097-9263-99f1ca3fd739 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.458489359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672289458475860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ca9c01d6-3d47-4097-9263-99f1ca3fd739 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.459010460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9ac575b3-78ba-41d8-a432-b41d9be7b4f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.459062374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9ac575b3-78ba-41d8-a432-b41d9be7b4f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:09 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:38:09.459335526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4,PodSandboxId:81633eda4b4b6493505e4cf9f1533aa0c4089bdade69ac526ce9eac8e35bbfc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671494702297973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db68da18-b403-43a6-abdd-f3354e633a5c,},Annotations:map[string]string{io.kubernetes.container.hash: a6a96ae5,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5,PodSandboxId:0f674b1d9d1bde9f2ae0a752db7e07644cfaaa5d60f27ee7ed24251831543611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671494201005964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blwwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190c406e-eb21-4420-bcec-ad218ec4b760,},Annotations:map[string]string{io.kubernetes.container.hash: a97f7de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9,PodSandboxId:19645c821c2221e773501353e9ba91f3829dd284c4017500c7b6bc3af164b66f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671493246449131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gdks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35e6baf-1ad9-4df7-bbb1-a2443f8c658f,},Annotations:map[string]string{io.kubernetes.container.hash: e68a6069,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e,PodSandboxId:0115656e4008f9184d3ef2b731d827d8842a25fb27bb4f1aee6a9360930e4a6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671469943028659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6b040e403e1f7b8f444afeddf58495ac,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd,PodSandboxId:9ed56be620ecc98e86933195a507ca808be1ef9d7d7f76a7080fac467caaa78f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671469350263892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2905c6218f261e3cf3463b3e9b70ca0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42630948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4,PodSandboxId:b86c8f503e84e886c1d6e6ceaaa9e3deb5a207f3d59c52594af655d7cad10dbd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671469254334938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-873005,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf9a6b1d9beb04cd73df00e42e9d441,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9,PodSandboxId:3818469509b8c50cf0b0dd0172dff260b20f3cf1435288f103662bd4c209a567,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671469043703900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a27c21cb939d09b9a4d98297cb64863b,},Annotations:map[string]string{io.kubernetes.container.hash: 681801f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9ac575b3-78ba-41d8-a432-b41d9be7b4f0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7cd76e5e503bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   81633eda4b4b6       storage-provisioner
	fc0700086e958       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   0f674b1d9d1bd       kube-proxy-blwwq
	8dc2215c9bd1d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   19645c821c222       coredns-5dd5756b68-5gdks
	bb28486f5d752       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   0115656e4008f       kube-scheduler-default-k8s-diff-port-873005
	3feac299b4d0a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   9ed56be620ecc       kube-apiserver-default-k8s-diff-port-873005
	a80c35ecce811       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   b86c8f503e84e       kube-controller-manager-default-k8s-diff-port-873005
	bc73770fd85b8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   3818469509b8c       etcd-default-k8s-diff-port-873005
	
	
	==> coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:56121 - 47465 "HINFO IN 535969699749763465.3459180298032533492. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006826379s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-873005
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-873005
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=default-k8s-diff-port-873005
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:24:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-873005
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 03:38:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:35:09 +0000   Wed, 31 Jan 2024 03:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:35:09 +0000   Wed, 31 Jan 2024 03:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:35:09 +0000   Wed, 31 Jan 2024 03:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:35:09 +0000   Wed, 31 Jan 2024 03:24:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.123
	  Hostname:    default-k8s-diff-port-873005
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a71ae2a1a134dc1a5493b4b45b07d10
	  System UUID:                0a71ae2a-1a13-4dc1-a549-3b4b45b07d10
	  Boot ID:                    a829a32b-2296-4678-b46a-8f074f5c5437
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-5gdks                                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     13m
	  kube-system                 etcd-default-k8s-diff-port-873005                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-873005             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-873005    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         13m
	  kube-system                 kube-proxy-blwwq                                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-873005             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         13m
	  kube-system                 metrics-server-57f55c9bc5-k4ht8                         100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         13m
	  kube-system                 storage-provisioner                                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             370Mi (17%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-873005 event: Registered Node default-k8s-diff-port-873005 in Controller
	
	
	==> dmesg <==
	[Jan31 03:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064441] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.502153] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.683099] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135151] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.390510] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.277437] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.127631] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.162396] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.130556] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.248461] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +17.188632] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[Jan31 03:20] kauditd_printk_skb: 29 callbacks suppressed
	[Jan31 03:24] systemd-fstab-generator[3507]: Ignoring "noauto" for root device
	[  +8.791078] systemd-fstab-generator[3836]: Ignoring "noauto" for root device
	[ +14.241088] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.534625] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] <==
	{"level":"info","ts":"2024-01-31T03:24:30.032723Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9b1c55f2bfc48094","initial-advertise-peer-urls":["https://192.168.61.123:2380"],"listen-peer-urls":["https://192.168.61.123:2380"],"advertise-client-urls":["https://192.168.61.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-31T03:24:30.032754Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-31T03:24:30.033009Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.123:2380"}
	{"level":"info","ts":"2024-01-31T03:24:30.033131Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.123:2380"}
	{"level":"info","ts":"2024-01-31T03:24:30.77093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:30.771116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:30.771174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 received MsgPreVoteResp from 9b1c55f2bfc48094 at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:30.771225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 became candidate at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:30.77126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 received MsgVoteResp from 9b1c55f2bfc48094 at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:30.771299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 became leader at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:30.771333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b1c55f2bfc48094 elected leader 9b1c55f2bfc48094 at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:30.776138Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9b1c55f2bfc48094","local-member-attributes":"{Name:default-k8s-diff-port-873005 ClientURLs:[https://192.168.61.123:2379]}","request-path":"/0/members/9b1c55f2bfc48094/attributes","cluster-id":"f7e64f166fed626b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:24:30.777928Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:24:30.785172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:24:30.785296Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:30.785443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:24:30.792699Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.123:2379"}
	{"level":"info","ts":"2024-01-31T03:24:30.799159Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f7e64f166fed626b","local-member-id":"9b1c55f2bfc48094","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:30.799275Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:30.799322Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:30.799525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:24:30.79954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T03:34:31.190866Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-01-31T03:34:31.193756Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":722,"took":"2.113679ms","hash":3918872999}
	{"level":"info","ts":"2024-01-31T03:34:31.194053Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3918872999,"revision":722,"compact-revision":-1}
	
	
	==> kernel <==
	 03:38:09 up 18 min,  0 users,  load average: 0.11, 0.19, 0.21
	Linux default-k8s-diff-port-873005 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] <==
	I0131 03:34:33.310006       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:34:34.310342       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:34:34.310393       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:34:34.310401       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:34:34.310507       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:34:34.310728       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:34:34.311568       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:35:33.139122       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:35:34.311315       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:35:34.311496       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:35:34.311524       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:35:34.312755       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:35:34.312919       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:35:34.312952       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:36:33.138519       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0131 03:37:33.139191       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:37:34.311951       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:37:34.312121       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:37:34.312148       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:37:34.313374       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:37:34.313505       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:37:34.313598       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] <==
	I0131 03:32:19.912219       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:32:49.449234       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:32:49.921762       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:33:19.458412       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:33:19.929777       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:33:49.465766       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:33:49.938930       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:34:19.472302       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:34:19.948692       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:34:49.477693       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:34:49.958555       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:35:19.484132       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:35:19.967564       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:35:49.491225       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:35:49.977258       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:36:11.906400       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="298.935µs"
	E0131 03:36:19.496516       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:36:19.985656       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:36:24.908226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="113.683µs"
	E0131 03:36:49.501991       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:36:49.994354       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:37:19.507911       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:37:20.003475       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:37:49.515626       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:37:50.011748       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] <==
	I0131 03:24:54.680375       1 server_others.go:69] "Using iptables proxy"
	I0131 03:24:54.707433       1 node.go:141] Successfully retrieved node IP: 192.168.61.123
	I0131 03:24:54.782499       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 03:24:54.782623       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:24:54.788428       1 server_others.go:152] "Using iptables Proxier"
	I0131 03:24:54.789162       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:24:54.789502       1 server.go:846] "Version info" version="v1.28.4"
	I0131 03:24:54.789533       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:24:54.791611       1 config.go:188] "Starting service config controller"
	I0131 03:24:54.792392       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:24:54.792488       1 config.go:315] "Starting node config controller"
	I0131 03:24:54.792516       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:24:54.795022       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:24:54.795080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:24:54.896044       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0131 03:24:54.896192       1 shared_informer.go:318] Caches are synced for node config
	I0131 03:24:54.896372       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] <==
	W0131 03:24:33.350398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 03:24:33.350494       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0131 03:24:33.350479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:24:33.350626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:24:34.234011       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:24:34.234065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0131 03:24:34.316934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:34.316990       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:34.363414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:24:34.363521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 03:24:34.396639       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 03:24:34.396748       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0131 03:24:34.404737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:24:34.405096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0131 03:24:34.407283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:34.407348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:34.480177       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:24:34.480235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0131 03:24:34.526271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:34.526320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:34.536014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:34.536058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:34.623375       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0131 03:24:34.623463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0131 03:24:36.126675       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:19:25 UTC, ends at Wed 2024-01-31 03:38:10 UTC. --
	Jan 31 03:35:36 default-k8s-diff-port-873005 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:35:36 default-k8s-diff-port-873005 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:35:46 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:35:46.887350    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:35:57 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:35:57.896935    3843 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 31 03:35:57 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:35:57.896985    3843 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 31 03:35:57 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:35:57.897197    3843 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fl5pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-k4ht8_kube-system(604feb17-6aaf-40e8-a6e6-01c899530151): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:35:57 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:35:57.897239    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:36:11 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:36:11.886255    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:36:24 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:36:24.886763    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:36:35 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:36:35.886014    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:36:36 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:36:36.967444    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:36:36 default-k8s-diff-port-873005 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:36:36 default-k8s-diff-port-873005 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:36:36 default-k8s-diff-port-873005 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:36:47 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:36:47.890154    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:37:02 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:37:02.887487    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:37:17 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:37:17.885698    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:37:28 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:37:28.885952    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:37:36 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:37:36.968621    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:37:36 default-k8s-diff-port-873005 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:37:36 default-k8s-diff-port-873005 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:37:36 default-k8s-diff-port-873005 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:37:43 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:37:43.888514    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:37:57 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:37:57.885359    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:38:08 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:38:08.886481    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	
	
	==> storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] <==
	I0131 03:24:54.843111       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 03:24:54.856425       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 03:24:54.856557       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 03:24:54.871411       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 03:24:54.873920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-873005_a9037baf-05c8-49c6-9199-5be5275f8ac8!
	I0131 03:24:54.877236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a96113e-6153-4ae4-a3a1-c6eddde8bb54", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-873005_a9037baf-05c8-49c6-9199-5be5275f8ac8 became leader
	I0131 03:24:54.974429       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-873005_a9037baf-05c8-49c6-9199-5be5275f8ac8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-873005 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-k4ht8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-873005 describe pod metrics-server-57f55c9bc5-k4ht8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-873005 describe pod metrics-server-57f55c9bc5-k4ht8: exit status 1 (66.892113ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-k4ht8" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-873005 describe pod metrics-server-57f55c9bc5-k4ht8: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.32s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.34s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0131 03:29:33.680365 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:30:01.931389 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:30:12.029740 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:30:23.563221 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:30:25.142090 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:30:30.923965 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 03:31:35.073450 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:31:41.530961 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:31:48.186619 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:31:53.977361 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 03:32:12.249728 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:32:48.510090 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 03:33:10.633525 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:33:38.351077 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 03:33:38.886034 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:34:00.516617 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-958254 -n embed-certs-958254
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-31 03:38:28.189399471 +0000 UTC m=+5666.830747299
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-958254 -n embed-certs-958254
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-958254 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-958254 logs -n 25: (1.671646726s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-711547        | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC | 31 Jan 24 03:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-873005  | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC |                     |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229073             | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229073                  | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229073 --memory=2200 --alsologtostderr   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-229073 image list                           | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-096443 | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | disable-driver-mounts-096443                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625812                  | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:25 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-711547             | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-873005       | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-958254            | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:29 UTC |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-958254                 | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:17 UTC | 31 Jan 24 03:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:17:03
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:17:03.356553 1466459 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:17:03.356722 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356731 1466459 out.go:309] Setting ErrFile to fd 2...
	I0131 03:17:03.356736 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356921 1466459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:17:03.357497 1466459 out.go:303] Setting JSON to false
	I0131 03:17:03.358564 1466459 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28767,"bootTime":1706642257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:17:03.358632 1466459 start.go:138] virtualization: kvm guest
	I0131 03:17:03.361346 1466459 out.go:177] * [embed-certs-958254] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:17:03.363037 1466459 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:17:03.363052 1466459 notify.go:220] Checking for updates...
	I0131 03:17:03.364655 1466459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:17:03.366388 1466459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:17:03.368086 1466459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:17:03.369351 1466459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:17:03.370735 1466459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:17:03.372623 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:17:03.373004 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.373116 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.388091 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0131 03:17:03.388612 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.389200 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.389224 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.389606 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.389816 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.390157 1466459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:17:03.390631 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.390696 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.407513 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0131 03:17:03.408013 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.408552 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.408578 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.408936 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.409175 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.446580 1466459 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 03:17:03.447834 1466459 start.go:298] selected driver: kvm2
	I0131 03:17:03.447850 1466459 start.go:902] validating driver "kvm2" against &{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.447974 1466459 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:17:03.448798 1466459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.448929 1466459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:17:03.464292 1466459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:17:03.464713 1466459 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:17:03.464803 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:17:03.464821 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:17:03.464840 1466459 start_flags.go:321] config:
	{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.465034 1466459 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.466926 1466459 out.go:177] * Starting control plane node embed-certs-958254 in cluster embed-certs-958254
	I0131 03:17:03.166851 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:03.468094 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:17:03.468158 1466459 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:17:03.468179 1466459 cache.go:56] Caching tarball of preloaded images
	I0131 03:17:03.468267 1466459 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:17:03.468280 1466459 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:17:03.468422 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:17:03.468675 1466459 start.go:365] acquiring machines lock for embed-certs-958254: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:17:09.246814 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:12.318761 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:18.398731 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:21.470788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:27.550785 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:30.622804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:36.702802 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:39.774755 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:45.854764 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:48.926773 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:55.006804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:58.078768 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:04.158801 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:07.230749 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:13.310800 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:16.382788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:22.462833 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:25.534734 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:31.614821 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:34.686831 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:40.766796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:43.838796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:49.918807 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:52.923102 1465727 start.go:369] acquired machines lock for "old-k8s-version-711547" in 4m24.328353275s
	I0131 03:18:52.923156 1465727 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:18:52.923163 1465727 fix.go:54] fixHost starting: 
	I0131 03:18:52.923502 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:18:52.923535 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:18:52.938858 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0131 03:18:52.939426 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:18:52.939966 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:18:52.939993 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:18:52.940435 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:18:52.940700 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:18:52.940890 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:18:52.942694 1465727 fix.go:102] recreateIfNeeded on old-k8s-version-711547: state=Stopped err=<nil>
	I0131 03:18:52.942735 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	W0131 03:18:52.942937 1465727 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:18:52.944846 1465727 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-711547" ...
	I0131 03:18:52.946449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Start
	I0131 03:18:52.946661 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring networks are active...
	I0131 03:18:52.947481 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network default is active
	I0131 03:18:52.947856 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network mk-old-k8s-version-711547 is active
	I0131 03:18:52.948334 1465727 main.go:141] libmachine: (old-k8s-version-711547) Getting domain xml...
	I0131 03:18:52.949108 1465727 main.go:141] libmachine: (old-k8s-version-711547) Creating domain...
	I0131 03:18:52.920695 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:18:52.920763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:18:52.922905 1465496 machine.go:91] provisioned docker machine in 4m37.358485704s
	I0131 03:18:52.922986 1465496 fix.go:56] fixHost completed within 4m37.381896689s
	I0131 03:18:52.922997 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 4m37.381936859s
	W0131 03:18:52.923026 1465496 start.go:694] error starting host: provision: host is not running
	W0131 03:18:52.923126 1465496 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0131 03:18:52.923138 1465496 start.go:709] Will try again in 5 seconds ...
	I0131 03:18:54.170545 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting to get IP...
	I0131 03:18:54.171580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.171974 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.172053 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.171968 1467209 retry.go:31] will retry after 195.285731ms: waiting for machine to come up
	I0131 03:18:54.368768 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.369288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.369325 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.369224 1467209 retry.go:31] will retry after 291.163288ms: waiting for machine to come up
	I0131 03:18:54.661822 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.662222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.662266 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.662214 1467209 retry.go:31] will retry after 396.125436ms: waiting for machine to come up
	I0131 03:18:55.059613 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.060062 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.060099 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.060009 1467209 retry.go:31] will retry after 609.786973ms: waiting for machine to come up
	I0131 03:18:55.671954 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.672388 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.672431 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.672334 1467209 retry.go:31] will retry after 716.179011ms: waiting for machine to come up
	I0131 03:18:56.390239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:56.390632 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:56.390667 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:56.390568 1467209 retry.go:31] will retry after 881.998023ms: waiting for machine to come up
	I0131 03:18:57.274841 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:57.275260 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:57.275293 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:57.275202 1467209 retry.go:31] will retry after 1.172177257s: waiting for machine to come up
	I0131 03:18:58.449291 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:58.449814 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:58.449869 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:58.449774 1467209 retry.go:31] will retry after 1.046487536s: waiting for machine to come up
	I0131 03:18:57.925392 1465496 start.go:365] acquiring machines lock for no-preload-625812: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:18:59.498215 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:59.498699 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:59.498739 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:59.498640 1467209 retry.go:31] will retry after 1.563889217s: waiting for machine to come up
	I0131 03:19:01.063580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:01.064137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:01.064179 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:01.064063 1467209 retry.go:31] will retry after 2.225514736s: waiting for machine to come up
	I0131 03:19:03.290747 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:03.291285 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:03.291322 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:03.291205 1467209 retry.go:31] will retry after 2.011947032s: waiting for machine to come up
	I0131 03:19:05.305574 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:05.306072 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:05.306106 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:05.306012 1467209 retry.go:31] will retry after 3.104285698s: waiting for machine to come up
	I0131 03:19:08.411557 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:08.412028 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:08.412054 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:08.411975 1467209 retry.go:31] will retry after 4.201966677s: waiting for machine to come up
	I0131 03:19:12.618299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.618866 1465727 main.go:141] libmachine: (old-k8s-version-711547) Found IP for machine: 192.168.50.63
	I0131 03:19:12.618893 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserving static IP address...
	I0131 03:19:12.618913 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has current primary IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.619364 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserved static IP address: 192.168.50.63
	I0131 03:19:12.619389 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting for SSH to be available...
	I0131 03:19:12.619414 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.619452 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | skip adding static IP to network mk-old-k8s-version-711547 - found existing host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"}
	I0131 03:19:12.619471 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Getting to WaitForSSH function...
	I0131 03:19:12.621473 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621783 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.621805 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621891 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH client type: external
	I0131 03:19:12.621934 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa (-rw-------)
	I0131 03:19:12.621965 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:12.621977 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | About to run SSH command:
	I0131 03:19:12.621987 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | exit 0
	I0131 03:19:12.718254 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:12.718659 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetConfigRaw
	I0131 03:19:12.719369 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:12.722134 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722588 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.722611 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722906 1465727 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/config.json ...
	I0131 03:19:12.723101 1465727 machine.go:88] provisioning docker machine ...
	I0131 03:19:12.723121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:12.723399 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723611 1465727 buildroot.go:166] provisioning hostname "old-k8s-version-711547"
	I0131 03:19:12.723630 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723795 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.726052 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726463 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.726507 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726656 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.726832 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727022 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727122 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.727283 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.727665 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.727680 1465727 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-711547 && echo "old-k8s-version-711547" | sudo tee /etc/hostname
	I0131 03:19:12.870818 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-711547
	
	I0131 03:19:12.870872 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.873799 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874205 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.874242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874355 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.874585 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874774 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874920 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.875079 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.875412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.875428 1465727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-711547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-711547/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-711547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:13.014386 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:13.014419 1465727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:13.014447 1465727 buildroot.go:174] setting up certificates
	I0131 03:19:13.014460 1465727 provision.go:83] configureAuth start
	I0131 03:19:13.014471 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:13.014821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:13.017730 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018105 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.018149 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018286 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.020361 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020680 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.020707 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020896 1465727 provision.go:138] copyHostCerts
	I0131 03:19:13.020961 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:13.020975 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:13.021069 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:13.021199 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:13.021212 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:13.021252 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:13.021393 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:13.021404 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:13.021442 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:13.021512 1465727 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-711547 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube old-k8s-version-711547]
	I0131 03:19:13.265370 1465727 provision.go:172] copyRemoteCerts
	I0131 03:19:13.265438 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:13.265466 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.268546 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269055 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.269090 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269281 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.269518 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.269688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.269849 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.362848 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:13.384287 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0131 03:19:13.405813 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:19:13.427630 1465727 provision.go:86] duration metric: configureAuth took 413.151329ms
	I0131 03:19:13.427671 1465727 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:13.427880 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:19:13.427963 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.430829 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.431299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431515 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.431771 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.431939 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.432092 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.432256 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.432619 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.432638 1465727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:14.011257 1465898 start.go:369] acquired machines lock for "default-k8s-diff-port-873005" in 4m34.419162413s
	I0131 03:19:14.011330 1465898 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:14.011340 1465898 fix.go:54] fixHost starting: 
	I0131 03:19:14.011729 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:14.011767 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:14.028941 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0131 03:19:14.029399 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:14.029937 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:19:14.029968 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:14.030321 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:14.030510 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:14.030692 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:19:14.032290 1465898 fix.go:102] recreateIfNeeded on default-k8s-diff-port-873005: state=Stopped err=<nil>
	I0131 03:19:14.032322 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	W0131 03:19:14.032499 1465898 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:14.034263 1465898 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-873005" ...
	I0131 03:19:14.035857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Start
	I0131 03:19:14.036028 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring networks are active...
	I0131 03:19:14.036734 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network default is active
	I0131 03:19:14.037140 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network mk-default-k8s-diff-port-873005 is active
	I0131 03:19:14.037572 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Getting domain xml...
	I0131 03:19:14.038254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Creating domain...
	I0131 03:19:13.745584 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:13.745630 1465727 machine.go:91] provisioned docker machine in 1.02251207s
	I0131 03:19:13.745646 1465727 start.go:300] post-start starting for "old-k8s-version-711547" (driver="kvm2")
	I0131 03:19:13.745663 1465727 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:13.745688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:13.746069 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:13.746100 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.748837 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749259 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.749309 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749489 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.749691 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.749848 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.749999 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.844423 1465727 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:13.848230 1465727 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:13.848263 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:13.848346 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:13.848431 1465727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:13.848517 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:13.857046 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:13.877753 1465727 start.go:303] post-start completed in 132.085834ms
	I0131 03:19:13.877806 1465727 fix.go:56] fixHost completed within 20.954639604s
	I0131 03:19:13.877836 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.880627 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.880914 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.880948 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.881168 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.881401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881594 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881802 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.882012 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.882412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.882424 1465727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:14.011062 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671153.963761136
	
	I0131 03:19:14.011098 1465727 fix.go:206] guest clock: 1706671153.963761136
	I0131 03:19:14.011111 1465727 fix.go:219] Guest: 2024-01-31 03:19:13.963761136 +0000 UTC Remote: 2024-01-31 03:19:13.877812082 +0000 UTC m=+285.451358106 (delta=85.949054ms)
	I0131 03:19:14.011141 1465727 fix.go:190] guest clock delta is within tolerance: 85.949054ms
	I0131 03:19:14.011149 1465727 start.go:83] releasing machines lock for "old-k8s-version-711547", held for 21.088010365s
	I0131 03:19:14.011234 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.011556 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:14.014323 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014754 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.014790 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014966 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015623 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015846 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015953 1465727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:14.016017 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.016087 1465727 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:14.016121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.018767 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019063 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019147 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019185 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019338 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019422 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019450 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019500 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019693 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.019775 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019854 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.019952 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.020096 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.111280 1465727 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:14.148710 1465727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:14.287476 1465727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:14.293232 1465727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:14.293309 1465727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:14.306910 1465727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:14.306939 1465727 start.go:475] detecting cgroup driver to use...
	I0131 03:19:14.307001 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:14.325824 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:14.339835 1465727 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:14.339908 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:14.354064 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:14.367342 1465727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:14.476462 1465727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:14.602643 1465727 docker.go:233] disabling docker service ...
	I0131 03:19:14.602711 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:14.618228 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:14.630450 1465727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:14.758176 1465727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:14.870949 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:14.882268 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:14.898622 1465727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0131 03:19:14.898685 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.907377 1465727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:14.907470 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.915868 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.924046 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.932324 1465727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:14.941046 1465727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:14.949134 1465727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:14.949196 1465727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:14.965561 1465727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:14.973790 1465727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:15.078782 1465727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:15.239650 1465727 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:15.239735 1465727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:15.244418 1465727 start.go:543] Will wait 60s for crictl version
	I0131 03:19:15.244501 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:15.247984 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:15.287716 1465727 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:15.287827 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.339818 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.393318 1465727 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0131 03:19:15.394911 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:15.397888 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:15.398313 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398637 1465727 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:15.402865 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:15.414268 1465727 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 03:19:15.414361 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:15.460589 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:15.460676 1465727 ssh_runner.go:195] Run: which lz4
	I0131 03:19:15.464663 1465727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:15.468694 1465727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:15.468728 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0131 03:19:17.115892 1465727 crio.go:444] Took 1.651263 seconds to copy over tarball
	I0131 03:19:17.115979 1465727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:15.308732 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting to get IP...
	I0131 03:19:15.309704 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310121 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.310092 1467325 retry.go:31] will retry after 215.51674ms: waiting for machine to come up
	I0131 03:19:15.527614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528155 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528192 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.528108 1467325 retry.go:31] will retry after 346.07944ms: waiting for machine to come up
	I0131 03:19:15.875792 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876340 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876375 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.876290 1467325 retry.go:31] will retry after 476.08407ms: waiting for machine to come up
	I0131 03:19:16.353712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354323 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.354196 1467325 retry.go:31] will retry after 382.739917ms: waiting for machine to come up
	I0131 03:19:16.738958 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739534 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739566 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.739504 1467325 retry.go:31] will retry after 511.138171ms: waiting for machine to come up
	I0131 03:19:17.252373 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252862 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252902 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:17.252798 1467325 retry.go:31] will retry after 879.985444ms: waiting for machine to come up
	I0131 03:19:18.134757 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135287 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135313 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:18.135233 1467325 retry.go:31] will retry after 1.043236668s: waiting for machine to come up
	I0131 03:19:19.179844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180339 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180369 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:19.180288 1467325 retry.go:31] will retry after 1.296129808s: waiting for machine to come up
	I0131 03:19:19.822171 1465727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706149181s)
	I0131 03:19:19.822217 1465727 crio.go:451] Took 2.706292 seconds to extract the tarball
	I0131 03:19:19.822233 1465727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:19.861493 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:19.905950 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:19.905979 1465727 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:19:19.906033 1465727 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.906061 1465727 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.906080 1465727 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.906077 1465727 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.906094 1465727 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:19.906099 1465727 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.906111 1465727 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0131 03:19:19.906179 1465727 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907636 1465727 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.907728 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.907746 1465727 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907750 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.907749 1465727 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.907783 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.907805 1465727 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0131 03:19:19.907807 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.091717 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.132448 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.140199 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0131 03:19:20.146177 1465727 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0131 03:19:20.146263 1465727 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.146324 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.206757 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.216932 1465727 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0131 03:19:20.216985 1465727 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.217082 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219340 1465727 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0131 03:19:20.219367 1465727 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0131 03:19:20.219390 1465727 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.219408 1465727 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.219432 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219449 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.222519 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.241389 1465727 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0131 03:19:20.241449 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.241452 1465727 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0131 03:19:20.241566 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.293129 1465727 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0131 03:19:20.293183 1465727 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.293213 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.293262 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.293284 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.293232 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321447 1465727 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0131 03:19:20.321512 1465727 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.321576 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321605 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0131 03:19:20.321743 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0131 03:19:20.401651 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0131 03:19:20.401720 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.401731 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0131 03:19:20.401793 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0131 03:19:20.401872 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0131 03:19:20.401945 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.439360 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0131 03:19:20.449635 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0131 03:19:20.765201 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:20.911818 1465727 cache_images.go:92] LoadImages completed in 1.005820808s
	W0131 03:19:20.911923 1465727 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
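Note: the entries above show cache_images.go deciding, per image, whether a transfer is needed. It asks the runtime (via podman image inspect --format {{.Id}}) for the image's ID, compares it against the hash recorded in the on-disk cache, removes the stale tag with crictl rmi, and then reloads from the cache directory. A minimal sketch of that check, assuming a host with podman and crictl on PATH; the image list, hashes, and cache-file naming below are illustrative, not taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// expected image IDs as recorded in the local image cache (illustrative values).
var wanted = map[string]string{
	"registry.k8s.io/pause:3.1":     "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e",
	"registry.k8s.io/coredns:1.6.2": "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b",
}

const cacheDir = "/home/jenkins/.minikube/cache/images/amd64" // illustrative path

func main() {
	for img, want := range wanted {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Output()
		got := strings.TrimSpace(string(out))
		if err == nil && got == want {
			continue // runtime already has the expected image
		}
		fmt.Printf("%q needs transfer: not at hash %s in container runtime\n", img, want)
		// drop the stale tag, then reload from the local cache (file naming here is illustrative)
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
		tarball := filepath.Join(cacheDir, strings.ReplaceAll(strings.ReplaceAll(img, "/", "_"), ":", "_"))
		fmt.Println("would load image from:", tarball)
	}
}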
	I0131 03:19:20.912019 1465727 ssh_runner.go:195] Run: crio config
	I0131 03:19:20.978267 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:20.978296 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:20.978318 1465727 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:20.978361 1465727 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-711547 NodeName:old-k8s-version-711547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0131 03:19:20.978540 1465727 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-711547"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-711547
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.63:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
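The kubeadm config printed above is rendered by kubeadm.go from the cluster parameters (node name, advertise address, pod and service CIDRs, Kubernetes version) before being copied to /var/tmp/minikube/kubeadm.yaml.new. A rough sketch of that kind of rendering with text/template; the struct and template here are an illustrative subset, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// illustrative subset of the parameters seen in the log above
type kubeadmParams struct {
	NodeName         string
	AdvertiseAddress string
	BindPort         int
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		NodeName:         "old-k8s-version-711547",
		AdvertiseAddress: "192.168.50.63",
		BindPort:         8443,
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.16.0",
	}
	// render to stdout; minikube writes the result to /var/tmp/minikube/kubeadm.yaml.new
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
}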
	
	I0131 03:19:20.978635 1465727 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-711547 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:19:20.978690 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0131 03:19:20.988177 1465727 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:20.988281 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:20.999558 1465727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0131 03:19:21.018567 1465727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:21.036137 1465727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
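The "scp memory --> <path> (<n> bytes)" entries mean the source is an in-memory buffer on the test host, not a local file; the bytes are streamed over the SSH connection and written to the remote path. A stdlib-only approximation using the external ssh client and sudo tee; host and paths below are placeholders:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// scpMemory streams data over ssh and writes it to dst on the remote host with sudo,
// mirroring the "scp memory --> <path> (<n> bytes)" entries in the log.
func scpMemory(host, dst string, data []byte) error {
	cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s >/dev/null", dst))
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	unit := []byte("[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet ...\n")
	if err := scpMemory("docker@192.168.50.63", "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", unit); err != nil {
		log.Fatal(err)
	}
}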
	I0131 03:19:21.051742 1465727 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:21.056334 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:21.068635 1465727 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547 for IP: 192.168.50.63
	I0131 03:19:21.068670 1465727 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:21.068847 1465727 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:21.068894 1465727 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:21.069089 1465727 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/client.key
	I0131 03:19:21.069185 1465727 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key.1519f60b
	I0131 03:19:21.069262 1465727 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key
	I0131 03:19:21.069418 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:21.069460 1465727 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:21.069476 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:21.069517 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:21.069556 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:21.069595 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:21.069658 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:21.070416 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:21.096160 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:21.119906 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:21.144478 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:21.169174 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:21.191807 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:21.215673 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:21.237705 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:21.262763 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:21.284935 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:21.306372 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:21.327718 1465727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:21.343219 1465727 ssh_runner.go:195] Run: openssl version
	I0131 03:19:21.348904 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:21.358119 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362537 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362619 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.368555 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:21.378236 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:21.387651 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392087 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392155 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.397511 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:21.406631 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:21.416176 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420716 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420816 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.426032 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
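The ln -fs commands above give each CA its OpenSSL "hashed" name in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0); the name is the subject hash printed by openssl x509 -hash -noout plus a ".0" suffix. A small sketch of deriving the link name and creating the symlink, with placeholder paths:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashedLink returns the /etc/ssl/certs symlink name OpenSSL would use for pemPath,
// i.e. the subject hash printed by `openssl x509 -hash -noout` plus a ".0" suffix.
func hashedLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
	name, err := hashedLink(pem)
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + name
	fmt.Println("ln -fs", pem, link)
	// equivalent of the log's: sudo /bin/bash -c "test -L <link> || ln -fs <pem> <link>"
	_ = exec.Command("sudo", "/bin/bash", "-c", fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)).Run()
}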
	I0131 03:19:21.434979 1465727 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:21.439153 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:21.444648 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:21.450243 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:21.455489 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:21.460794 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:21.466219 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
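Each openssl x509 -noout -checkend 86400 run above asks whether the certificate will still be valid 24 hours (86400 seconds) from now; exit status 0 means it will not expire inside that window, so minikube keeps it. The equivalent check in Go with crypto/x509, assuming the file holds a single PEM block:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires within d,
// the same question `openssl x509 -noout -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	}
}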
	I0131 03:19:21.471530 1465727 kubeadm.go:404] StartCluster: {Name:old-k8s-version-711547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:21.471628 1465727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:21.471677 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:21.508722 1465727 cri.go:89] found id: ""
	I0131 03:19:21.508795 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:21.517913 1465727 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:21.517943 1465727 kubeadm.go:636] restartCluster start
	I0131 03:19:21.518012 1465727 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:21.526290 1465727 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:21.527501 1465727 kubeconfig.go:92] found "old-k8s-version-711547" server: "https://192.168.50.63:8443"
	I0131 03:19:21.530259 1465727 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:21.538442 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:21.538528 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:21.548956 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.038468 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.038574 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.049394 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.538605 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.538701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.549651 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:23.038857 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.038988 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.050489 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:20.478788 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479296 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479341 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:20.479262 1467325 retry.go:31] will retry after 1.385706797s: waiting for machine to come up
	I0131 03:19:21.867040 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867480 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867506 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:21.867432 1467325 retry.go:31] will retry after 2.023566474s: waiting for machine to come up
	I0131 03:19:23.893713 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894188 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894222 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:23.894119 1467325 retry.go:31] will retry after 2.335724195s: waiting for machine to come up
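The retry.go:31 entries show libmachine polling for the VM's DHCP lease with growing, jittered delays (1.38s, 2.02s, 2.33s, ...). A minimal sketch of that retry pattern; the probe function and the exact growth curve here are illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling probe until it succeeds or attempts run out,
// sleeping a jittered, growing delay between tries (as retry.go does while
// waiting for the machine's IP to appear).
func retryWithBackoff(probe func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	}, 10, time.Second)
	fmt.Println("done:", err)
}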
	I0131 03:19:23.539335 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.539444 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.550866 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.038592 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.038710 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.050077 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.538579 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.538661 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.549810 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.039420 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.039512 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.051101 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.538549 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.538654 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.552821 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.039279 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.039395 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.050150 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.538699 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.538841 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.553086 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.038585 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.038701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.050685 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.539261 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.539392 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.550316 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:28.039448 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.039564 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.051196 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
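The repeating "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs are a poll loop: roughly every 500ms minikube runs pgrep for a kube-apiserver process on the node and gives up once its context deadline expires (the "context deadline exceeded" conclusion appears further down). A stripped-down version of that loop, with the SSH hop replaced by a direct exec for illustration:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process until ctx expires,
// mirroring the api_server.go:166 / :170 loop in the log.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println("needs reconfigure:", err)
	}
}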
	I0131 03:19:26.231540 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231945 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231970 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:26.231895 1467325 retry.go:31] will retry after 2.956919877s: waiting for machine to come up
	I0131 03:19:29.190010 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190513 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190549 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:29.190433 1467325 retry.go:31] will retry after 3.186526476s: waiting for machine to come up
	I0131 03:19:28.539230 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.539326 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.551055 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.038675 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.038783 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.049926 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.538507 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.538606 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.549309 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.039257 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.039359 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.050555 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.539147 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.539286 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.550179 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.038685 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.038809 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.050144 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.538939 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.539024 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.549604 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.549647 1465727 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:31.549660 1465727 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:31.549678 1465727 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:31.549770 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:31.587751 1465727 cri.go:89] found id: ""
	I0131 03:19:31.587822 1465727 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:31.603397 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:31.612195 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:31.612263 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620959 1465727 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620984 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:31.737416 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.645078 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.861238 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.944897 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:33.048396 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:33.048496 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
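After the stale-config check fails, restartCluster re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of doing a full kubeadm init, then starts polling for the apiserver process. The phase sequence, binary path, and config path below are taken from the log lines above; the wrapper itself is a simplified sketch:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		binDir = "/var/lib/minikube/binaries/v1.16.0"
		config = "/var/tmp/minikube/kubeadm.yaml"
	)
	// the phase order used by restartCluster in the log above
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, config)
		c := exec.Command("/bin/bash", "-c", cmd)
		c.Stdout, c.Stderr = os.Stdout, os.Stderr
		if err := c.Run(); err != nil {
			log.Fatalf("kubeadm init phase %s failed: %v", phase, err)
		}
	}
}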
	I0131 03:19:33.587337 1466459 start.go:369] acquired machines lock for "embed-certs-958254" in 2m30.118621848s
	I0131 03:19:33.587411 1466459 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:33.587444 1466459 fix.go:54] fixHost starting: 
	I0131 03:19:33.587872 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:33.587906 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:33.608024 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0131 03:19:33.608545 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:33.609015 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:19:33.609048 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:33.609468 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:33.609659 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:33.609796 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:19:33.611524 1466459 fix.go:102] recreateIfNeeded on embed-certs-958254: state=Stopped err=<nil>
	I0131 03:19:33.611572 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	W0131 03:19:33.611752 1466459 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:33.613613 1466459 out.go:177] * Restarting existing kvm2 VM for "embed-certs-958254" ...
	I0131 03:19:32.379632 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380099 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380134 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Found IP for machine: 192.168.61.123
	I0131 03:19:32.380150 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserving static IP address...
	I0131 03:19:32.380555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserved static IP address: 192.168.61.123
	I0131 03:19:32.380594 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.380610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for SSH to be available...
	I0131 03:19:32.380647 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | skip adding static IP to network mk-default-k8s-diff-port-873005 - found existing host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"}
	I0131 03:19:32.380661 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Getting to WaitForSSH function...
	I0131 03:19:32.382401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.382787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382872 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH client type: external
	I0131 03:19:32.382903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa (-rw-------)
	I0131 03:19:32.382943 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:32.382959 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | About to run SSH command:
	I0131 03:19:32.382984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | exit 0
	I0131 03:19:32.470672 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:32.471097 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetConfigRaw
	I0131 03:19:32.471768 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.474225 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474597 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.474631 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474948 1465898 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/config.json ...
	I0131 03:19:32.475139 1465898 machine.go:88] provisioning docker machine ...
	I0131 03:19:32.475158 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:32.475374 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475542 1465898 buildroot.go:166] provisioning hostname "default-k8s-diff-port-873005"
	I0131 03:19:32.475564 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475720 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.478005 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478356 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.478391 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478466 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.478693 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.478871 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.479083 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.479287 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.479622 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.479636 1465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-873005 && echo "default-k8s-diff-port-873005" | sudo tee /etc/hostname
	I0131 03:19:32.608136 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-873005
	
	I0131 03:19:32.608173 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.611145 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611544 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.611580 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611716 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.611937 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612154 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612354 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.612511 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.612878 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.612903 1465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-873005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-873005/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-873005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:32.734103 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:32.734144 1465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:32.734176 1465898 buildroot.go:174] setting up certificates
	I0131 03:19:32.734196 1465898 provision.go:83] configureAuth start
	I0131 03:19:32.734209 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.734550 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.737468 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.737810 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.737844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.738096 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.740787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.741233 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741374 1465898 provision.go:138] copyHostCerts
	I0131 03:19:32.741429 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:32.741442 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:32.741498 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:32.741632 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:32.741642 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:32.741665 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:32.741716 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:32.741722 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:32.741738 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:32.741784 1465898 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-873005 san=[192.168.61.123 192.168.61.123 localhost 127.0.0.1 minikube default-k8s-diff-port-873005]
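The provision.go:112 line above generates a server certificate for the machine whose SAN list covers the VM IP, localhost, and the machine name, signed with the shared minikube CA key pair. A rough, self-contained sketch of what that step produces; minikube's real implementation reads the CA from .minikube/certs rather than generating one in memory as done here:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// in-memory CA standing in for ca.pem / ca-key.pem
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-873005"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the SAN list from the provision.go:112 line above
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-873005"},
		IPAddresses: []net.IP{net.ParseIP("192.168.61.123"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}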
	I0131 03:19:32.850632 1465898 provision.go:172] copyRemoteCerts
	I0131 03:19:32.850695 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:32.850721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.853291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.853651 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.854016 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.854194 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.854361 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:32.943528 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0131 03:19:32.970345 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:32.995909 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:33.024408 1465898 provision.go:86] duration metric: configureAuth took 290.196472ms
	I0131 03:19:33.024438 1465898 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:33.024661 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:33.024755 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.027751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.028312 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028469 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.028719 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.028961 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.029180 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.029424 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.029790 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.029810 1465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:33.350806 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:33.350839 1465898 machine.go:91] provisioned docker machine in 875.685131ms
	I0131 03:19:33.350855 1465898 start.go:300] post-start starting for "default-k8s-diff-port-873005" (driver="kvm2")
	I0131 03:19:33.350871 1465898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:33.350895 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.351287 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:33.351334 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.353986 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354419 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.354443 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354689 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.354898 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.355046 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.355221 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.439603 1465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:33.443119 1465898 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:33.443145 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:33.443222 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:33.443320 1465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:33.443430 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:33.451425 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:33.471270 1465898 start.go:303] post-start completed in 120.397142ms
	I0131 03:19:33.471302 1465898 fix.go:56] fixHost completed within 19.459960903s
	I0131 03:19:33.471326 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.473691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474060 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.474091 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474244 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.474430 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474627 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474753 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.474918 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.475237 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.475249 1465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:33.587174 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671173.532604525
	
	I0131 03:19:33.587202 1465898 fix.go:206] guest clock: 1706671173.532604525
	I0131 03:19:33.587217 1465898 fix.go:219] Guest: 2024-01-31 03:19:33.532604525 +0000 UTC Remote: 2024-01-31 03:19:33.47130747 +0000 UTC m=+294.038044427 (delta=61.297055ms)
	I0131 03:19:33.587243 1465898 fix.go:190] guest clock delta is within tolerance: 61.297055ms
	I0131 03:19:33.587251 1465898 start.go:83] releasing machines lock for "default-k8s-diff-port-873005", held for 19.57594393s
	I0131 03:19:33.587282 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.587557 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:33.590395 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590776 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.590809 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590995 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591623 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591822 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591926 1465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:33.591999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.592054 1465898 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:33.592078 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.594999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595446 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.595477 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595644 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.595805 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595879 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596082 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596258 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.596286 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.596380 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.596390 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.596579 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596760 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596951 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.715222 1465898 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:33.721794 1465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:33.871506 1465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:33.877488 1465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:33.877596 1465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:33.896121 1465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:33.896156 1465898 start.go:475] detecting cgroup driver to use...
	I0131 03:19:33.896245 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:33.912876 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:33.927661 1465898 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:33.927743 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:33.944332 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:33.960438 1465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:34.086879 1465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:34.218866 1465898 docker.go:233] disabling docker service ...
	I0131 03:19:34.218946 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:34.233585 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:34.246358 1465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:34.387480 1465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:34.513082 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:34.526532 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:34.544801 1465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:34.544902 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.558806 1465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:34.558905 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.569251 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.582784 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.595979 1465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:34.608318 1465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:34.616417 1465898 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:34.616494 1465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:34.629018 1465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:34.638513 1465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:34.753541 1465898 ssh_runner.go:195] Run: sudo systemctl restart crio
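	The CRI-O preparation logged above boils down to a handful of shell commands on the guest. A rough consolidated sketch follows (paths and values are copied from the log lines above; this is not a verbatim dump of minikube's own runner):

# point crictl at the CRI-O socket
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
# pin the pause image and switch CRI-O to the cgroupfs cgroup manager
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
# bridge netfilter and IP forwarding needed by the bridge CNI
sudo modprobe br_netfilter
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo systemctl daemon-reload
sudo systemctl restart crio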
	I0131 03:19:34.963779 1465898 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:34.963868 1465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:34.969755 1465898 start.go:543] Will wait 60s for crictl version
	I0131 03:19:34.969826 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:19:34.974176 1465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:35.020759 1465898 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:35.020850 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.072999 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.143712 1465898 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:19:33.615078 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Start
	I0131 03:19:33.615258 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring networks are active...
	I0131 03:19:33.616056 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network default is active
	I0131 03:19:33.616376 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network mk-embed-certs-958254 is active
	I0131 03:19:33.616770 1466459 main.go:141] libmachine: (embed-certs-958254) Getting domain xml...
	I0131 03:19:33.617424 1466459 main.go:141] libmachine: (embed-certs-958254) Creating domain...
	I0131 03:19:35.016562 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting to get IP...
	I0131 03:19:35.017711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.018134 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.018234 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.018115 1467469 retry.go:31] will retry after 281.115622ms: waiting for machine to come up
	I0131 03:19:35.300987 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.301642 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.301672 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.301583 1467469 retry.go:31] will retry after 382.696531ms: waiting for machine to come up
	I0131 03:19:35.686371 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.686945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.686983 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.686881 1467469 retry.go:31] will retry after 467.397008ms: waiting for machine to come up
	I0131 03:19:36.156392 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.157129 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.157161 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.157087 1467469 retry.go:31] will retry after 588.034996ms: waiting for machine to come up
	I0131 03:19:36.747103 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.747739 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.747771 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.747711 1467469 retry.go:31] will retry after 570.532804ms: waiting for machine to come up
	I0131 03:19:37.319694 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.320231 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.320264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.320206 1467469 retry.go:31] will retry after 572.77687ms: waiting for machine to come up
	I0131 03:19:37.895308 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.895814 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.895844 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.895769 1467469 retry.go:31] will retry after 833.23491ms: waiting for machine to come up
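	The repeated "will retry after ..." lines above are the kvm2 driver polling libvirt for a DHCP lease on the domain's MAC address, with a growing delay between attempts. A minimal stand-alone approximation is shown below; the virsh polling and the delay schedule are illustrative assumptions, not minikube's actual Go code:

MAC=52:54:00:13:06:de
NET=mk-embed-certs-958254
delay=0.3
until virsh net-dhcp-leases "$NET" | grep -qi "$MAC"; do
  sleep "$delay"
  delay=$(awk -v d="$delay" 'BEGIN{print d*1.5}')   # back off between polls
done
echo "machine is up: lease found for $MAC"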
	I0131 03:19:33.549149 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.048799 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.549314 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.048885 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.075463 1465727 api_server.go:72] duration metric: took 2.027068042s to wait for apiserver process to appear ...
	I0131 03:19:35.075490 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:35.075525 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:35.145198 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:35.148610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149052 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:35.149087 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149329 1465898 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:35.153543 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:35.169144 1465898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:35.169226 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:35.217572 1465898 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:35.217675 1465898 ssh_runner.go:195] Run: which lz4
	I0131 03:19:35.221897 1465898 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:35.226333 1465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:35.226373 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:36.870773 1465898 crio.go:444] Took 1.648904 seconds to copy over tarball
	I0131 03:19:36.870903 1465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:38.730812 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:38.731317 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:38.731367 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:38.731283 1467469 retry.go:31] will retry after 1.083923411s: waiting for machine to come up
	I0131 03:19:39.816550 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:39.817000 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:39.817035 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:39.816957 1467469 retry.go:31] will retry after 1.414569505s: waiting for machine to come up
	I0131 03:19:41.232711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:41.233072 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:41.233104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:41.233020 1467469 retry.go:31] will retry after 1.829994317s: waiting for machine to come up
	I0131 03:19:43.065343 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:43.065823 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:43.065857 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:43.065760 1467469 retry.go:31] will retry after 2.506323142s: waiting for machine to come up
	I0131 03:19:40.076389 1465727 api_server.go:269] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0131 03:19:40.076448 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.717017 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.717059 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:41.717079 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.738258 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.738291 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:42.075699 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.730135 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.730181 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:42.730203 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.805335 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.805375 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.076421 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.082935 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:43.082971 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.575664 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.582814 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:19:43.593073 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:19:43.593113 1465727 api_server.go:131] duration metric: took 8.517613988s to wait for apiserver health ...
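	The health wait above is a plain poll of the apiserver's /healthz endpoint: 403 and 500 responses (seen while the bootstrap post-start hooks finish) are treated as "not ready yet", and the loop ends on the first 200. A minimal curl equivalent of that check, for illustration only (minikube performs it in Go against the same URL):

APISERVER=https://192.168.50.63:8443
until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$APISERVER/healthz")" = "200" ]; do
  sleep 0.5   # early responses are 403/500 until RBAC and bootstrap hooks complete
done
echo "apiserver healthy"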
	I0131 03:19:43.593127 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:43.593144 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:43.594982 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:19:39.815034 1465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944091458s)
	I0131 03:19:39.815074 1465898 crio.go:451] Took 2.944224 seconds to extract the tarball
	I0131 03:19:39.815090 1465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:39.855696 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:39.904386 1465898 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:19:39.904418 1465898 cache_images.go:84] Images are preloaded, skipping loading
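	The preload handling above follows a simple pattern: if crictl reports none of the expected images, copy the preloaded tarball over SSH, unpack it into /var, delete it, and re-check. A condensed sketch (file names and tar flags are taken from the log; the grep is a simplification of minikube's image comparison):

if ! sudo crictl images --output json | grep -q kube-apiserver; then
  # /preloaded.tar.lz4 has already been copied to the guest at this point
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm /preloaded.tar.lz4
  sudo crictl images --output json   # re-check: images should now be preloaded
fi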
	I0131 03:19:39.904509 1465898 ssh_runner.go:195] Run: crio config
	I0131 03:19:39.972894 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:19:39.972928 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:39.972957 1465898 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:39.972985 1465898 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-873005 NodeName:default-k8s-diff-port-873005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:19:39.973201 1465898 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-873005"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:39.973298 1465898 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-873005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0131 03:19:39.973365 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:19:39.982097 1465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:39.982206 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:39.993781 1465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0131 03:19:40.012618 1465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:40.031973 1465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0131 03:19:40.049646 1465898 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:40.053498 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:40.066873 1465898 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005 for IP: 192.168.61.123
	I0131 03:19:40.066914 1465898 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:40.067198 1465898 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:40.067254 1465898 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:40.067376 1465898 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/client.key
	I0131 03:19:40.067474 1465898 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key.596e38b1
	I0131 03:19:40.067535 1465898 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key
	I0131 03:19:40.067748 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:40.067797 1465898 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:40.067813 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:40.067850 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:40.067885 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:40.067924 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:40.067978 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:40.068687 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:40.094577 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:40.117833 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:40.140782 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:40.163701 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:40.187177 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:40.218570 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:40.246136 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:40.275403 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:40.302040 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:40.327371 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:40.352927 1465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:40.371690 1465898 ssh_runner.go:195] Run: openssl version
	I0131 03:19:40.377700 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:40.387507 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393609 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393701 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.401095 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:40.415647 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:40.426902 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431720 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431803 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.437347 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:40.446986 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:40.457779 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462716 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462790 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.468321 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
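	The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: each CA placed in /etc/ssl/certs needs a symlink named after its subject-name hash so TLS clients can find it. The generic form for one certificate (certificate path taken from the log):

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # e.g. b5213941.0 in the log above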
	I0131 03:19:40.481055 1465898 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:40.486096 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:40.492538 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:40.498664 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:40.504630 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:40.510588 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:40.516480 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:19:40.524391 1465898 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-873005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:40.524509 1465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:40.524570 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:40.575788 1465898 cri.go:89] found id: ""
	I0131 03:19:40.575887 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:40.585291 1465898 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:40.585320 1465898 kubeadm.go:636] restartCluster start
	I0131 03:19:40.585383 1465898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:40.594593 1465898 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:40.596215 1465898 kubeconfig.go:92] found "default-k8s-diff-port-873005" server: "https://192.168.61.123:8444"
	I0131 03:19:40.600123 1465898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:40.609224 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:40.609289 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:40.620769 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.110331 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.110450 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.121982 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.609492 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.609592 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.621972 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.109411 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.109515 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.124820 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.609296 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.609412 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.621029 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.109511 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.109606 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.124911 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.609397 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.609514 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.626240 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:44.109323 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.109419 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.124549 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.573357 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:45.573785 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:45.573821 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:45.573735 1467469 retry.go:31] will retry after 3.608126135s: waiting for machine to come up
	I0131 03:19:43.596636 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:19:43.613239 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
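	The 457-byte file written above is the bridge CNI configuration recommended at cni.go:146. Its exact contents are not printed in the log; the following is only a representative bridge+portmap conflist of that kind, and the 10.244.0.0/16 subnet is the pod CIDR used elsewhere in this run, so treat both as assumptions rather than minikube's literal output:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF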
	I0131 03:19:43.655123 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:19:43.665773 1465727 system_pods.go:59] 7 kube-system pods found
	I0131 03:19:43.665819 1465727 system_pods.go:61] "coredns-5644d7b6d9-2g2fj" [fc3c718c-696b-4a57-83e2-d9ee3bed6923] Running
	I0131 03:19:43.665844 1465727 system_pods.go:61] "etcd-old-k8s-version-711547" [4c5a2527-ffa7-4771-8380-56556030ad90] Running
	I0131 03:19:43.665852 1465727 system_pods.go:61] "kube-apiserver-old-k8s-version-711547" [df7cbcbe-1aeb-4986-82e5-70d495b2579d] Running
	I0131 03:19:43.665859 1465727 system_pods.go:61] "kube-controller-manager-old-k8s-version-711547" [21cccd1c-4b8e-4d4f-956d-872aa474e9d8] Running
	I0131 03:19:43.665868 1465727 system_pods.go:61] "kube-proxy-7dtkz" [aac05831-252e-486d-8bc8-772731374a89] Running
	I0131 03:19:43.665875 1465727 system_pods.go:61] "kube-scheduler-old-k8s-version-711547" [da2f43ad-bbc3-44fb-a608-08c2ae08818f] Running
	I0131 03:19:43.665885 1465727 system_pods.go:61] "storage-provisioner" [f16355c3-b573-40f2-ad98-32c077f04e46] Running
	I0131 03:19:43.665894 1465727 system_pods.go:74] duration metric: took 10.742015ms to wait for pod list to return data ...
	I0131 03:19:43.665915 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:19:43.670287 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:19:43.670328 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:19:43.670343 1465727 node_conditions.go:105] duration metric: took 4.422551ms to run NodePressure ...
	I0131 03:19:43.670366 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:43.947579 1465727 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:19:43.952499 1465727 retry.go:31] will retry after 170.414704ms: kubelet not initialised
	I0131 03:19:44.130420 1465727 retry.go:31] will retry after 504.822426ms: kubelet not initialised
	I0131 03:19:44.640095 1465727 retry.go:31] will retry after 519.270243ms: kubelet not initialised
	I0131 03:19:45.164417 1465727 retry.go:31] will retry after 730.256814ms: kubelet not initialised
	I0131 03:19:45.903026 1465727 retry.go:31] will retry after 853.098887ms: kubelet not initialised
	I0131 03:19:46.764300 1465727 retry.go:31] will retry after 2.467014704s: kubelet not initialised
	I0131 03:19:44.609572 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.609682 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.625242 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.109761 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.109894 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.121467 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.610114 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.610210 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.621421 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.109926 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.109996 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.121003 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.609509 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.609649 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.620779 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.110208 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.110316 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.122909 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.609355 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.609474 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.620375 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.109993 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.110131 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.123531 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.610170 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.610266 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.620964 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.109874 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.109997 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.121344 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
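(Editor's note, illustrative only.) The repeated "Checking apiserver status ..." lines above come from minikube polling for the kube-apiserver process over SSH, running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until it appears. The sketch below shows that polling pattern; `waitForAPIServer` is a hypothetical helper, not minikube's actual api_server.go code, and the timeout value is an assumption.

// Illustrative stand-in for the polling loop behind the log lines above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer runs `pgrep -xnf kube-apiserver.*minikube.*` about every
// 500ms (the cadence visible in the log) until the process exists or the
// timeout elapses.
func waitForAPIServer(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // apiserver process found, return its PID(s)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if pid, err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println("apiserver not up:", err)
	} else {
		fmt.Print("apiserver pid: ", pid)
	}
}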
	I0131 03:19:49.183666 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:49.184174 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:49.184209 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:49.184103 1467469 retry.go:31] will retry after 3.277150176s: waiting for machine to come up
	I0131 03:19:52.465465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.465830 1466459 main.go:141] libmachine: (embed-certs-958254) Found IP for machine: 192.168.39.232
	I0131 03:19:52.465849 1466459 main.go:141] libmachine: (embed-certs-958254) Reserving static IP address...
	I0131 03:19:52.465863 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has current primary IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.466264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.466307 1466459 main.go:141] libmachine: (embed-certs-958254) Reserved static IP address: 192.168.39.232
	I0131 03:19:52.466331 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting for SSH to be available...
	I0131 03:19:52.466352 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | skip adding static IP to network mk-embed-certs-958254 - found existing host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"}
	I0131 03:19:52.466368 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Getting to WaitForSSH function...
	I0131 03:19:52.468562 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.468867 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.468900 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.469041 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH client type: external
	I0131 03:19:52.469074 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa (-rw-------)
	I0131 03:19:52.469117 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:52.469137 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | About to run SSH command:
	I0131 03:19:52.469151 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | exit 0
	I0131 03:19:52.554397 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:52.554838 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetConfigRaw
	I0131 03:19:52.555611 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.558511 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.558906 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.558945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.559188 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:19:52.559400 1466459 machine.go:88] provisioning docker machine ...
	I0131 03:19:52.559421 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:52.559645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559816 1466459 buildroot.go:166] provisioning hostname "embed-certs-958254"
	I0131 03:19:52.559831 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559994 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.562543 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.562901 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.562933 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.563085 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.563276 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563436 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563628 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.563800 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.564147 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.564161 1466459 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-958254 && echo "embed-certs-958254" | sudo tee /etc/hostname
	I0131 03:19:52.688777 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-958254
	
	I0131 03:19:52.688817 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.692015 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.692497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692797 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.693013 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693184 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693388 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.693579 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.694043 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.694071 1466459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-958254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-958254/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-958254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:52.821443 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:52.821489 1466459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:52.821543 1466459 buildroot.go:174] setting up certificates
	I0131 03:19:52.821567 1466459 provision.go:83] configureAuth start
	I0131 03:19:52.821583 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.821930 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.825108 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825496 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.825527 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825756 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.828269 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828621 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.828651 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828893 1466459 provision.go:138] copyHostCerts
	I0131 03:19:52.828964 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:52.828987 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:52.829069 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:52.829194 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:52.829209 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:52.829243 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:52.829323 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:52.829335 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:52.829366 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:52.829466 1466459 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.embed-certs-958254 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube embed-certs-958254]
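(Editor's note, illustrative only.) The provisioning step above generates a machine server certificate whose SANs cover the VM IP, localhost, 127.0.0.1, "minikube" and the profile hostname. The sketch below shows how such a SAN-bearing certificate can be issued from a CA key pair with crypto/x509; it is an assumption-laden illustration of the idea (key sizes, validity periods, file handling), not minikube's provision.go.

// Hypothetical sketch of issuing a server certificate with the SANs shown in
// the log. Errors are elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair (in minikube this already exists under .minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the provision step above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-958254"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-958254"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.232"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write server.pem; the matching private key would be written the same way.
	f, _ := os.Create("server.pem")
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}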
	I0131 03:19:52.931760 1466459 provision.go:172] copyRemoteCerts
	I0131 03:19:52.931825 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:52.931856 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.935111 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935440 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.935465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935721 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.935915 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.936117 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.936273 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.024352 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:53.051185 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:19:53.076996 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:53.097919 1466459 provision.go:86] duration metric: configureAuth took 276.335726ms
	I0131 03:19:53.097951 1466459 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:53.098189 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:53.098319 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.101687 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102128 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.102178 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102334 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.102610 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.102877 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.103072 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.103309 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.103829 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.103860 1466459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:49.236547 1465727 retry.go:31] will retry after 1.793227218s: kubelet not initialised
	I0131 03:19:51.035248 1465727 retry.go:31] will retry after 2.779615352s: kubelet not initialised
	I0131 03:19:53.664145 1465496 start.go:369] acquired machines lock for "no-preload-625812" in 55.738696582s
	I0131 03:19:53.664205 1465496 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:53.664216 1465496 fix.go:54] fixHost starting: 
	I0131 03:19:53.664629 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:53.664680 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:53.683147 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0131 03:19:53.684034 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:53.684629 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:19:53.684660 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:53.685055 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:53.685266 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:19:53.685468 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:19:53.687260 1465496 fix.go:102] recreateIfNeeded on no-preload-625812: state=Stopped err=<nil>
	I0131 03:19:53.687288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	W0131 03:19:53.687444 1465496 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:53.689464 1465496 out.go:177] * Restarting existing kvm2 VM for "no-preload-625812" ...
	I0131 03:19:49.610240 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.610357 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.621551 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.110145 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.110248 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.121902 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.609752 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.609896 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.620729 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.620760 1465898 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:50.620769 1465898 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:50.620781 1465898 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:50.620842 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:50.655962 1465898 cri.go:89] found id: ""
	I0131 03:19:50.656034 1465898 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:50.670196 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:50.678438 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:50.678512 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686353 1465898 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686377 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:50.787983 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.766656 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.947670 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.020841 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.087869 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:52.087974 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:52.588285 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.088598 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.588683 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.088222 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.416070 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:53.416102 1466459 machine.go:91] provisioned docker machine in 856.686657ms
	I0131 03:19:53.416115 1466459 start.go:300] post-start starting for "embed-certs-958254" (driver="kvm2")
	I0131 03:19:53.416130 1466459 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:53.416152 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.416515 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:53.416550 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.419110 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.419525 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419836 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.420057 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.420223 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.420376 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.503785 1466459 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:53.507858 1466459 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:53.507890 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:53.508021 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:53.508094 1466459 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:53.508184 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:53.515845 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:53.537459 1466459 start.go:303] post-start completed in 121.324433ms
	I0131 03:19:53.537495 1466459 fix.go:56] fixHost completed within 19.950074846s
	I0131 03:19:53.537526 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.540719 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541097 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.541138 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541371 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.541590 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541707 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541924 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.542116 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.542438 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.542452 1466459 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:53.663950 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671193.614107467
	
	I0131 03:19:53.663981 1466459 fix.go:206] guest clock: 1706671193.614107467
	I0131 03:19:53.663991 1466459 fix.go:219] Guest: 2024-01-31 03:19:53.614107467 +0000 UTC Remote: 2024-01-31 03:19:53.537501013 +0000 UTC m=+170.232508862 (delta=76.606454ms)
	I0131 03:19:53.664051 1466459 fix.go:190] guest clock delta is within tolerance: 76.606454ms
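(Editor's note, illustrative only.) The fix step above compares the guest VM clock against the host clock and leaves it alone when the drift is within tolerance (here about 76.6ms). Below is a minimal sketch of that comparison; the 2s tolerance and the resync comment are assumptions for illustration, not constants taken from minikube's fix.go.

// Minimal sketch of the guest/host clock-drift check reported above.
package main

import (
	"fmt"
	"time"
)

func checkClockDelta(guest, host time.Time, tolerance time.Duration) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		return
	}
	fmt.Printf("guest clock drifted by %v, resyncing\n", delta)
	// A real implementation would set the guest clock here (e.g. via `date -s` over SSH).
}

func main() {
	host := time.Now()
	guest := host.Add(76606454 * time.Nanosecond) // the ~76.6ms delta from the log
	checkClockDelta(guest, host, 2*time.Second)
}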
	I0131 03:19:53.664061 1466459 start.go:83] releasing machines lock for "embed-certs-958254", held for 20.076673524s
	I0131 03:19:53.664095 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.664469 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:53.667439 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668024 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.668104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668219 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.668884 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669087 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669227 1466459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:53.669314 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.669346 1466459 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:53.669377 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.673093 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673248 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673420 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673194 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673517 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673557 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673580 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673667 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673734 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.673969 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.673982 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.674173 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.674180 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.674312 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.799336 1466459 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:53.805162 1466459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:53.952587 1466459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:53.958419 1466459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:53.958530 1466459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:53.971832 1466459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:53.971866 1466459 start.go:475] detecting cgroup driver to use...
	I0131 03:19:53.971946 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:53.988375 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:54.000875 1466459 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:54.000948 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:54.017770 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:54.034214 1466459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:54.154352 1466459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:54.314926 1466459 docker.go:233] disabling docker service ...
	I0131 03:19:54.315012 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:54.330557 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:54.344595 1466459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:54.468196 1466459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:54.630438 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:54.645472 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:54.665340 1466459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:54.665427 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.677758 1466459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:54.677843 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.690405 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.702616 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.712654 1466459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:54.723746 1466459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:54.735284 1466459 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:54.735358 1466459 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:54.751082 1466459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:54.762460 1466459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:54.916842 1466459 ssh_runner.go:195] Run: sudo systemctl restart crio
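(Editor's note, illustrative only.) The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup driver, conmon cgroup) before restarting CRI-O. The sketch below mirrors those in-place line rewrites in Go; `rewriteLine` is a hypothetical helper, and only the paths and replacement values are taken from the log.

// Hypothetical sketch of the in-place config edits performed by the sed
// commands above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteLine replaces every line matching pattern with repl, mirroring
// `sed -i 's|^.*pattern.*$|repl|'`.
func rewriteLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)^.*" + pattern + ".*$")
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// The same two substitutions the log shows: pause image and cgroup driver.
	if err := rewriteLine(conf, "pause_image = ", `pause_image = "registry.k8s.io/pause:3.9"`); err != nil {
		fmt.Println(err)
	}
	if err := rewriteLine(conf, "cgroup_manager = ", `cgroup_manager = "cgroupfs"`); err != nil {
		fmt.Println(err)
	}
}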
	I0131 03:19:55.105770 1466459 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:55.105862 1466459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:55.111870 1466459 start.go:543] Will wait 60s for crictl version
	I0131 03:19:55.112014 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:19:55.116743 1466459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:55.165427 1466459 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:55.165526 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.223389 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.272307 1466459 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:19:53.690828 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Start
	I0131 03:19:53.691030 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring networks are active...
	I0131 03:19:53.691801 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network default is active
	I0131 03:19:53.692297 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network mk-no-preload-625812 is active
	I0131 03:19:53.693485 1465496 main.go:141] libmachine: (no-preload-625812) Getting domain xml...
	I0131 03:19:53.694618 1465496 main.go:141] libmachine: (no-preload-625812) Creating domain...
	I0131 03:19:55.042532 1465496 main.go:141] libmachine: (no-preload-625812) Waiting to get IP...
	I0131 03:19:55.043607 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.044041 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.044180 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.044045 1467687 retry.go:31] will retry after 230.922351ms: waiting for machine to come up
	I0131 03:19:55.276816 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.277402 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.277435 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.277367 1467687 retry.go:31] will retry after 370.068692ms: waiting for machine to come up
	I0131 03:19:55.274017 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:55.277592 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278017 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:55.278056 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278356 1466459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:55.283769 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:55.298107 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:55.298188 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:55.338433 1466459 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:55.338558 1466459 ssh_runner.go:195] Run: which lz4
	I0131 03:19:55.342771 1466459 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:55.347160 1466459 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:55.347206 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:56.991725 1466459 crio.go:444] Took 1.648994 seconds to copy over tarball
	I0131 03:19:56.991821 1466459 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
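(Editor's note, illustrative only.) The preload path above first checks for /preloaded.tar.lz4, copies the ~458MB tarball over when it is missing, then unpacks it into /var with lz4 while preserving xattrs and capabilities. The sketch below shows that check-then-extract flow; the exec wrapper is a stand-in for minikube's ssh_runner, not its real API.

// Illustrative sketch of the preload check and extraction above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball not present yet:", err)
		return // in minikube this is where the tarball gets scp'd over
	}
	// Same extraction command as in the log, preserving xattrs/capabilities.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}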
	I0131 03:19:53.823139 1465727 retry.go:31] will retry after 3.780431021s: kubelet not initialised
	I0131 03:19:57.615679 1465727 retry.go:31] will retry after 12.134340719s: kubelet not initialised
	I0131 03:19:54.588794 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.623052 1465898 api_server.go:72] duration metric: took 2.535180605s to wait for apiserver process to appear ...
	I0131 03:19:54.623092 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:54.623141 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:55.649133 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.649797 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.649838 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.649768 1467687 retry.go:31] will retry after 421.622241ms: waiting for machine to come up
	I0131 03:19:56.073712 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.074467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.074513 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.074269 1467687 retry.go:31] will retry after 587.05453ms: waiting for machine to come up
	I0131 03:19:56.663210 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.663749 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.663790 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.663678 1467687 retry.go:31] will retry after 620.56275ms: waiting for machine to come up
	I0131 03:19:57.286207 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.286688 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.286737 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.286647 1467687 retry.go:31] will retry after 674.764903ms: waiting for machine to come up
	I0131 03:19:57.963069 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.963573 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.963599 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.963520 1467687 retry.go:31] will retry after 1.10400582s: waiting for machine to come up
	I0131 03:19:59.068964 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:59.069440 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:59.069467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:59.069383 1467687 retry.go:31] will retry after 1.48867494s: waiting for machine to come up
	I0131 03:20:00.084963 1466459 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093104085s)
	I0131 03:20:00.085000 1466459 crio.go:451] Took 3.093238 seconds to extract the tarball
	I0131 03:20:00.085014 1466459 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:20:00.153533 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:00.203133 1466459 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:20:00.203215 1466459 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:20:00.203308 1466459 ssh_runner.go:195] Run: crio config
	I0131 03:20:00.266864 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:00.266898 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:00.266927 1466459 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:00.266955 1466459 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-958254 NodeName:embed-certs-958254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:00.267148 1466459 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-958254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:00.267253 1466459 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-958254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:00.267331 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:20:00.279543 1466459 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:00.279637 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:00.292463 1466459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0131 03:20:00.313102 1466459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:20:00.329962 1466459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0131 03:20:00.351487 1466459 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:00.355881 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:00.368624 1466459 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254 for IP: 192.168.39.232
	I0131 03:20:00.368668 1466459 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:00.368836 1466459 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:00.368890 1466459 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:00.368997 1466459 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/client.key
	I0131 03:20:00.369071 1466459 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key.ca7bc7e0
	I0131 03:20:00.369108 1466459 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key
	I0131 03:20:00.369230 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:00.369257 1466459 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:00.369268 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:00.369294 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:00.369317 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:00.369341 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:00.369379 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:00.370093 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:00.392771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 03:20:00.416504 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:00.441357 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 03:20:00.469603 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:00.493533 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:00.521871 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:00.547738 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:00.572771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:00.596263 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:00.618766 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:00.642074 1466459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:00.657634 1466459 ssh_runner.go:195] Run: openssl version
	I0131 03:20:00.662869 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:00.673704 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678201 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678299 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.683872 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:00.694619 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:00.705736 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710374 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710451 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.715928 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:00.727620 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:00.738237 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742428 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742525 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.747812 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:00.757953 1466459 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:00.762418 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:00.768325 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:00.773824 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:00.779967 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:00.785943 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:00.791907 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:20:00.797790 1466459 kubeadm.go:404] StartCluster: {Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:00.797882 1466459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:00.797989 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:00.843199 1466459 cri.go:89] found id: ""
	I0131 03:20:00.843289 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:00.853963 1466459 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:00.853994 1466459 kubeadm.go:636] restartCluster start
	I0131 03:20:00.854060 1466459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:00.864776 1466459 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:00.866019 1466459 kubeconfig.go:92] found "embed-certs-958254" server: "https://192.168.39.232:8443"
	I0131 03:20:00.868584 1466459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:00.878689 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:00.878765 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:00.891577 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.378755 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.378849 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.392040 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.879661 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.879770 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.894998 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.379551 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.379671 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.393008 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.879560 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.879680 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.896699 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:59.557240 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.557285 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.557308 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.612724 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.612775 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.624061 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.721181 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:19:59.721236 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.123708 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.134187 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.134229 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.624066 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.630341 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.630374 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.123728 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.131385 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.131479 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.623667 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.629384 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.629431 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.123701 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.129220 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.129272 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.623693 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.629228 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.629271 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.123778 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.132555 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:03.132617 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.623244 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.630694 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:20:03.649732 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:03.649778 1465898 api_server.go:131] duration metric: took 9.02667615s to wait for apiserver health ...
	I0131 03:20:03.649792 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:20:03.649802 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:03.651944 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:03.653645 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:03.683325 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:03.719778 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:03.745975 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:03.746029 1465898 system_pods.go:61] "coredns-5dd5756b68-xlq7n" [0b9d620d-d79f-474e-aeb7-1357daaaa849] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:03.746044 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [2f2f474f-bee9-4df2-a5f6-2525bc15c99a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:03.746056 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [ba87e90b-b01b-4aa7-a4da-68d8e5c39020] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:03.746088 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [a96ebed4-d6f6-47b7-a8f6-b80acc9cde60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:03.746111 1465898 system_pods.go:61] "kube-proxy-trv94" [c085dfdb-0b75-40c1-b331-ef687888090e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:03.746121 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [b7adce77-8007-4316-9a2a-bdcec260840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:03.746141 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-fct8b" [b1d9d7e3-98c4-4b7a-acd1-d88fe109ef33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:03.746155 1465898 system_pods.go:61] "storage-provisioner" [be762288-ff88-44e7-938d-9ecc8a977526] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:03.746169 1465898 system_pods.go:74] duration metric: took 26.36215ms to wait for pod list to return data ...
	I0131 03:20:03.746183 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:03.755320 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:03.755365 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:03.755384 1465898 node_conditions.go:105] duration metric: took 9.194114ms to run NodePressure ...
	I0131 03:20:03.755413 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:04.124222 1465898 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130888 1465898 kubeadm.go:787] kubelet initialised
	I0131 03:20:04.130921 1465898 kubeadm.go:788] duration metric: took 6.663771ms waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130932 1465898 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:04.141883 1465898 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:00.559917 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:00.715628 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:00.715677 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:00.560506 1467687 retry.go:31] will retry after 1.67725835s: waiting for machine to come up
	I0131 03:20:02.240289 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:02.240826 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:02.240863 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:02.240781 1467687 retry.go:31] will retry after 2.023057937s: waiting for machine to come up
	I0131 03:20:04.266202 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:04.266733 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:04.266825 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:04.266715 1467687 retry.go:31] will retry after 2.664323304s: waiting for machine to come up
	I0131 03:20:03.379260 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.379366 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.395063 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:03.879206 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.879327 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.896172 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.378721 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.378829 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.395328 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.878823 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.878944 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.891061 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.379692 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.379795 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.395247 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.879667 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.879811 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.894445 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.378974 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.379107 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.391878 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.879343 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.879446 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.892910 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.379549 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.379647 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.391991 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.879610 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.879757 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.895280 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.154196 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:08.664906 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:06.932836 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:06.933529 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:06.933574 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:06.933459 1467687 retry.go:31] will retry after 3.065677387s: waiting for machine to come up
	I0131 03:20:10.001330 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:10.002186 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:10.002216 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:10.002101 1467687 retry.go:31] will retry after 3.036905728s: waiting for machine to come up
	I0131 03:20:08.379200 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.379310 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.392983 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:08.878955 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.879070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.890999 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.379530 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.379633 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.391351 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.878733 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.878814 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.891556 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.379098 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.379206 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.391233 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.879672 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.879786 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.892324 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.892364 1466459 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:10.892377 1466459 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:10.892393 1466459 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:10.892471 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:10.932354 1466459 cri.go:89] found id: ""
	I0131 03:20:10.932425 1466459 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:10.948273 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:10.957212 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:10.957285 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966329 1466459 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966369 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.093326 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.750399 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.960956 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.060752 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
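(The five kubeadm init phase invocations above — certs, kubeconfig, kubelet-start, control-plane, etcd — are the reconfigure path taken once the admin.conf/kubelet.conf check failed. A minimal Go sketch of driving that same phase sequence is below; the local runSSH helper and the hard-coded binary path are assumptions for illustration, not minikube's actual ssh_runner.)

package main

import (
	"fmt"
	"os/exec"
)

// runSSH is a stand-in for an SSH command runner (hypothetical helper);
// it shells out locally so the sketch stays self-contained.
func runSSH(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	// Same phase order as in the log above; the PATH and config location
	// mirror the log but are assumptions of this sketch.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if err := runSSH(cmd); err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}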
	I0131 03:20:12.148963 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:12.149070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:12.649736 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:13.150030 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:09.755152 1465727 retry.go:31] will retry after 13.770889272s: kubelet not initialised
	I0131 03:20:09.648674 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:09.648703 1465898 pod_ready.go:81] duration metric: took 5.506781604s waiting for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:09.648716 1465898 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656233 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:11.656258 1465898 pod_ready.go:81] duration metric: took 2.007535905s waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656267 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663570 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.663600 1465898 pod_ready.go:81] duration metric: took 1.007324961s waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668808 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.668832 1465898 pod_ready.go:81] duration metric: took 5.21407ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668843 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673583 1465898 pod_ready.go:92] pod "kube-proxy-trv94" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.673603 1465898 pod_ready.go:81] duration metric: took 4.754586ms waiting for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679052 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.679074 1465898 pod_ready.go:81] duration metric: took 5.453847ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679082 1465898 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
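(The pod_ready entries above poll each control-plane pod in kube-system, one at a time with a 4m0s cap, until its Ready condition reports True. A rough client-go sketch of that kind of wait loop follows; the kubeconfig path, poll interval, and pod names are placeholders, not the harness's own code.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls one pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// kubeconfig path is a placeholder for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, p := range []string{"etcd-default-k8s-diff-port-873005", "kube-apiserver-default-k8s-diff-port-873005"} {
		fmt.Println("waiting for", p)
		if err := waitPodReady(cs, "kube-system", p, 4*time.Minute); err != nil {
			fmt.Println("not ready:", err)
		}
	}
}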
	I0131 03:20:13.040911 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.041419 1465496 main.go:141] libmachine: (no-preload-625812) Found IP for machine: 192.168.72.23
	I0131 03:20:13.041451 1465496 main.go:141] libmachine: (no-preload-625812) Reserving static IP address...
	I0131 03:20:13.041471 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has current primary IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.042029 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.042083 1465496 main.go:141] libmachine: (no-preload-625812) Reserved static IP address: 192.168.72.23
	I0131 03:20:13.042105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | skip adding static IP to network mk-no-preload-625812 - found existing host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"}
	I0131 03:20:13.042124 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Getting to WaitForSSH function...
	I0131 03:20:13.042143 1465496 main.go:141] libmachine: (no-preload-625812) Waiting for SSH to be available...
	I0131 03:20:13.044263 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044670 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.044707 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044866 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH client type: external
	I0131 03:20:13.044890 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa (-rw-------)
	I0131 03:20:13.044958 1465496 main.go:141] libmachine: (no-preload-625812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:20:13.044979 1465496 main.go:141] libmachine: (no-preload-625812) DBG | About to run SSH command:
	I0131 03:20:13.044993 1465496 main.go:141] libmachine: (no-preload-625812) DBG | exit 0
	I0131 03:20:13.142763 1465496 main.go:141] libmachine: (no-preload-625812) DBG | SSH cmd err, output: <nil>: 
	I0131 03:20:13.143065 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetConfigRaw
	I0131 03:20:13.143763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.146827 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147322 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.147356 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147639 1465496 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/config.json ...
	I0131 03:20:13.147843 1465496 machine.go:88] provisioning docker machine ...
	I0131 03:20:13.147866 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:13.148104 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148307 1465496 buildroot.go:166] provisioning hostname "no-preload-625812"
	I0131 03:20:13.148332 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148510 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.151259 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151623 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.151658 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151808 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.152034 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152222 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152415 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.152601 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.152979 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.152996 1465496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-625812 && echo "no-preload-625812" | sudo tee /etc/hostname
	I0131 03:20:13.302957 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-625812
	
	I0131 03:20:13.302989 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.306162 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306612 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.306656 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306932 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.307236 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307458 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307644 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.307891 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.308385 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.308415 1465496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-625812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-625812/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-625812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:20:13.459393 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:20:13.459432 1465496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:20:13.459458 1465496 buildroot.go:174] setting up certificates
	I0131 03:20:13.459476 1465496 provision.go:83] configureAuth start
	I0131 03:20:13.459490 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.459818 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.462867 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463301 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.463333 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463516 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.466156 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466597 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.466629 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466788 1465496 provision.go:138] copyHostCerts
	I0131 03:20:13.466856 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:20:13.466870 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:20:13.466945 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:20:13.467051 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:20:13.467061 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:20:13.467099 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:20:13.467182 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:20:13.467195 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:20:13.467226 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:20:13.467295 1465496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.no-preload-625812 san=[192.168.72.23 192.168.72.23 localhost 127.0.0.1 minikube no-preload-625812]
	I0131 03:20:13.629331 1465496 provision.go:172] copyRemoteCerts
	I0131 03:20:13.629392 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:20:13.629420 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.632451 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.632871 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.632903 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.633155 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.633334 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.633502 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.633643 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:13.729991 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:20:13.755853 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:20:13.781125 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:20:13.803778 1465496 provision.go:86] duration metric: configureAuth took 344.286867ms
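(The configureAuth step above generates a server certificate whose SANs cover the VM IP, localhost, and the machine hostname, then copies ca.pem, server.pem, and server-key.pem into /etc/docker. A simplified, self-signed sketch of producing a cert with those SANs via crypto/x509 is below; the real provisioner signs with the minikube CA rather than self-signing, so this is illustration only.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirror the log's san=[...] list; self-signed here for brevity.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-625812"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-625812"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.23"), net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	f, _ := os.Create("server.pem")
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}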
	I0131 03:20:13.803820 1465496 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:20:13.804030 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:20:13.804138 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.807234 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807675 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.807736 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807899 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.808108 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808307 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808461 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.808663 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.809033 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.809055 1465496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:20:14.179008 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:20:14.179039 1465496 machine.go:91] provisioned docker machine in 1.031179568s
	I0131 03:20:14.179055 1465496 start.go:300] post-start starting for "no-preload-625812" (driver="kvm2")
	I0131 03:20:14.179072 1465496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:20:14.179134 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.179500 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:20:14.179542 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.183050 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183483 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.183515 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183726 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.183919 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.184103 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.184299 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.282828 1465496 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:20:14.288098 1465496 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:20:14.288135 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:20:14.288242 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:20:14.288351 1465496 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:20:14.288482 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:20:14.297359 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:14.323339 1465496 start.go:303] post-start completed in 144.265535ms
	I0131 03:20:14.323379 1465496 fix.go:56] fixHost completed within 20.659162262s
	I0131 03:20:14.323408 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.326649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.327063 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327386 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.327693 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.327882 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.328068 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.328260 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:14.328638 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:14.328668 1465496 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:20:14.464275 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671214.411008277
	
	I0131 03:20:14.464299 1465496 fix.go:206] guest clock: 1706671214.411008277
	I0131 03:20:14.464307 1465496 fix.go:219] Guest: 2024-01-31 03:20:14.411008277 +0000 UTC Remote: 2024-01-31 03:20:14.32338512 +0000 UTC m=+358.954052365 (delta=87.623157ms)
	I0131 03:20:14.464327 1465496 fix.go:190] guest clock delta is within tolerance: 87.623157ms
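(The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the ~88ms delta as within tolerance. A tiny sketch of that comparison is below; the one-second threshold is an assumption of the sketch, not the harness's configured limit.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, taken from the log above.
	guestStr := "1706671214.411008277"
	sec, _ := strconv.ParseFloat(guestStr, 64)
	guest := time.Unix(0, int64(sec*float64(time.Second)))

	// Local reference time; in the log this is the host-side timestamp.
	host := time.Now()

	delta := host.Sub(guest)
	if math.Abs(delta.Seconds()) < 1.0 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v too large, would resync\n", delta)
	}
}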
	I0131 03:20:14.464332 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 20.800154018s
	I0131 03:20:14.464349 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.464664 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:14.467627 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.467912 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.467952 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.468086 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468622 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468827 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468918 1465496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:20:14.468974 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.469103 1465496 ssh_runner.go:195] Run: cat /version.json
	I0131 03:20:14.469143 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.471884 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472243 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472408 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472472 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472507 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472426 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472696 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472810 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472825 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473046 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473048 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473275 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.473288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473547 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.563583 1465496 ssh_runner.go:195] Run: systemctl --version
	I0131 03:20:14.602977 1465496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:20:14.752069 1465496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:20:14.759056 1465496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:20:14.759149 1465496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:20:14.778064 1465496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:20:14.778102 1465496 start.go:475] detecting cgroup driver to use...
	I0131 03:20:14.778197 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:20:14.791672 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:20:14.803938 1465496 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:20:14.804018 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:20:14.816689 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:20:14.829415 1465496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:20:14.956428 1465496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:20:15.082172 1465496 docker.go:233] disabling docker service ...
	I0131 03:20:15.082260 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:20:15.094675 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:20:15.106262 1465496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:20:15.229460 1465496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:20:15.341585 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:20:15.354587 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:20:15.374141 1465496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:20:15.374228 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.386153 1465496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:20:15.386224 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.398130 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.407759 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.417278 1465496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:20:15.427128 1465496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:20:15.437249 1465496 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:20:15.437318 1465496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:20:15.451522 1465496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:20:15.460741 1465496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:20:15.564813 1465496 ssh_runner.go:195] Run: sudo systemctl restart crio
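(The sed commands above pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and set conmon_cgroup to "pod" in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. A small Go sketch of the same rewrites on an in-memory toy excerpt of the file is below; the log does a delete-then-append for conmon_cgroup, which the sketch simplifies to an in-place replacement.)

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Toy excerpt of /etc/crio/crio.conf.d/02-crio.conf; the real file has more keys.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Same three rewrites the log performs with sed.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
}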
	I0131 03:20:15.729334 1465496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:20:15.729436 1465496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:20:15.734544 1465496 start.go:543] Will wait 60s for crictl version
	I0131 03:20:15.734634 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:15.738536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:20:15.789942 1465496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:20:15.790066 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.844864 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.895286 1465496 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0131 03:20:13.649824 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.150192 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.649250 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.677858 1466459 api_server.go:72] duration metric: took 2.528895825s to wait for apiserver process to appear ...
	I0131 03:20:14.677890 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:14.677920 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
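(From here the process stops looking for the apiserver pid and instead polls https://192.168.39.232:8443/healthz, tolerating the anonymous-user 403s and the partial 500s seen further down until the endpoint returns 200. A bare-bones polling sketch is below; skipping TLS verification and the 4-minute deadline are assumptions of the sketch, not the harness's settings.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; no client certificate is presented,
	// which is why a 403 for the anonymous user is treated as "not ready
	// yet" rather than fatal.
	url := "https://192.168.39.232:8443/healthz"
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}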
	I0131 03:20:14.688429 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:17.190684 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:15.896701 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:15.899655 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900079 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:15.900105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900392 1465496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0131 03:20:15.904607 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:15.916202 1465496 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 03:20:15.916255 1465496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:15.964126 1465496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0131 03:20:15.964157 1465496 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:20:15.964213 1465496 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.964249 1465496 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.964291 1465496 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.964278 1465496 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.964411 1465496 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0131 03:20:15.964472 1465496 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.964696 1465496 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.964771 1465496 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:15.965842 1465496 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.966659 1465496 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0131 03:20:15.966705 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.966737 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.967221 1465496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.967386 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.157890 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.160428 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0131 03:20:16.170727 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.185791 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.209517 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.212835 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.215809 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.221405 1465496 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0131 03:20:16.221457 1465496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.221504 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369265 1465496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0131 03:20:16.369302 1465496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0131 03:20:16.369324 1465496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.369340 1465496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.369344 1465496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0131 03:20:16.369367 1465496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.369382 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369392 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369404 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369474 1465496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0131 03:20:16.369494 1465496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.369506 1465496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0131 03:20:16.369521 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369529 1465496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.369562 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369617 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.384313 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.384333 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.470950 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0131 03:20:16.471044 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.471091 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.496271 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0131 03:20:16.496296 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496398 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496485 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:16.496488 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496338 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.496494 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496730 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.531464 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531550 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0131 03:20:16.531570 1465496 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531594 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531640 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531595 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0131 03:20:16.531669 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531638 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531738 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0131 03:20:16.536091 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0131 03:20:16.805880 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339660 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.807978952s)
	I0131 03:20:20.339703 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0131 03:20:20.339719 1465496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.533795146s)
	I0131 03:20:20.339744 1465496 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339785 1465496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0131 03:20:20.339823 1465496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339829 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339863 1465496 ssh_runner.go:195] Run: which crictl
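(The cache_images lines above check each required image with `podman image inspect` and, when it is missing from the runtime, transfer the cached tarball under /var/lib/minikube/images and load it with `podman load -i`. A condensed sketch of that check-then-load step is below; it runs podman locally for brevity instead of over SSH.)

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached image tarball only if the image is not already
// present in the container runtime's store. Paths mirror the log; error
// handling and SSH wiring are omitted from the sketch.
func ensureImage(ref, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run(); err == nil {
		fmt.Println("already present:", ref)
		return nil
	}
	fmt.Println("loading", ref, "from", tarball)
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	if err := ensureImage("registry.k8s.io/etcd:3.5.10-0",
		"/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
		fmt.Println("load failed:", err)
	}
}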
	I0131 03:20:19.144422 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.144461 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.144481 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.199050 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.199092 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.199110 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.248370 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.248405 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:19.678887 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.699942 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.699975 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.178212 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.196360 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:20.196408 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.679003 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.685599 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:20:20.693909 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:20.693939 1466459 api_server.go:131] duration metric: took 6.016042033s to wait for apiserver health ...
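	The verbose /healthz output above lists each apiserver check as [+] (passing) or [-] (failing); for failing post-start hooks the reason is withheld from callers that are not authorized to see the details, and minikube simply re-polls until the endpoint returns 200. With a working kubeconfig for the cluster, the same verbose view can be fetched directly (context name assumed to follow minikube's profile-name convention):
	  kubectl --context embed-certs-958254 get --raw '/healthz?verbose'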
	I0131 03:20:20.693972 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:20.693978 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:20.695935 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:20.697296 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:20.708301 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
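	The 457-byte conflist written here is the bridge CNI configuration minikube generates for the "kvm2" driver + "crio" runtime combination. Its exact contents are not shown in the log, but while the profile is still running it can be inspected on the node, following the same invocation style used elsewhere in this report:
	  out/minikube-linux-amd64 -p embed-certs-958254 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"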
	I0131 03:20:20.730496 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:20.741756 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:20.741799 1466459 system_pods.go:61] "coredns-5dd5756b68-ntmxp" [bb90dd61-c60a-4beb-b77c-66c4b5ce56a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:20.741810 1466459 system_pods.go:61] "etcd-embed-certs-958254" [69a5883a-307d-47d1-86ef-6f76bf77bdff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:20.741830 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [1cad3813-0df9-4729-862f-d1ab237d297c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:20.741841 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [34bfed89-5c8c-4294-843b-d32261c8fb5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:20.741851 1466459 system_pods.go:61] "kube-proxy-q6dmr" [092e0786-80f7-480c-8ede-95e11c1f17a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:20.741862 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [28c8d75e-9517-4ccc-85ef-5b535973c829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:20.741876 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-d8x5f" [fc69fea8-ab7b-4f3d-980f-7ad995027e77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:20.741889 1466459 system_pods.go:61] "storage-provisioner" [5026a00d-8df8-408a-a164-cf22697260e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:20.741898 1466459 system_pods.go:74] duration metric: took 11.375298ms to wait for pod list to return data ...
	I0131 03:20:20.741912 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:20.748073 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:20.748110 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:20.748125 1466459 node_conditions.go:105] duration metric: took 6.206594ms to run NodePressure ...
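	The NodePressure check reads the node's reported capacity (here 17784752Ki of ephemeral storage and 2 CPUs) and its pressure conditions. The same data is available through kubectl (context name assumed, as above):
	  kubectl --context embed-certs-958254 get node embed-certs-958254 -o jsonpath='{.status.capacity}'
	  kubectl --context embed-certs-958254 describe node embed-certs-958254 | grep -E 'MemoryPressure|DiskPressure|PIDPressure'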
	I0131 03:20:20.748147 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:21.022867 1466459 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028572 1466459 kubeadm.go:787] kubelet initialised
	I0131 03:20:21.028600 1466459 kubeadm.go:788] duration metric: took 5.696903ms waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028612 1466459 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:21.034373 1466459 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.040977 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041008 1466459 pod_ready.go:81] duration metric: took 6.605955ms waiting for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.041021 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041029 1466459 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.047304 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047360 1466459 pod_ready.go:81] duration metric: took 6.317423ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.047379 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047397 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.054356 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054380 1466459 pod_ready.go:81] duration metric: took 6.969808ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.054393 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054405 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.066327 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:19.688890 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.187659 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.403415 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.063558989s)
	I0131 03:20:22.403448 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0131 03:20:22.403467 1465496 ssh_runner.go:235] Completed: which crictl: (2.063583602s)
	I0131 03:20:22.403536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:22.403473 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.403667 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.453126 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0131 03:20:22.453255 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:25.325221 1465496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.871938157s)
	I0131 03:20:25.325266 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0131 03:20:25.325371 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.92167713s)
	I0131 03:20:25.325397 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0131 03:20:25.325430 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.325498 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.562106 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.562702 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.562730 1466459 pod_ready.go:81] duration metric: took 5.508313651s waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.562740 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570741 1466459 pod_ready.go:92] pod "kube-proxy-q6dmr" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.570776 1466459 pod_ready.go:81] duration metric: took 8.02796ms waiting for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570788 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
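	The pod_ready loop above is minikube polling each system-critical pod until its Ready condition is True or the 4m0s budget runs out. A rough hand-run equivalent for one of the label selectors it watches would be:
	  kubectl --context embed-certs-958254 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m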
	I0131 03:20:23.532998 1465727 kubeadm.go:787] kubelet initialised
	I0131 03:20:23.533031 1465727 kubeadm.go:788] duration metric: took 39.585413252s waiting for restarted kubelet to initialise ...
	I0131 03:20:23.533041 1465727 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:23.538956 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545637 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.545665 1465727 pod_ready.go:81] duration metric: took 6.67341ms waiting for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545679 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552018 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.552047 1465727 pod_ready.go:81] duration metric: took 6.359089ms waiting for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552061 1465727 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557416 1465727 pod_ready.go:92] pod "etcd-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.557446 1465727 pod_ready.go:81] duration metric: took 5.375834ms waiting for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557458 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563429 1465727 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.563458 1465727 pod_ready.go:81] duration metric: took 5.99092ms waiting for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563470 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931088 1465727 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.931123 1465727 pod_ready.go:81] duration metric: took 367.644608ms waiting for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931135 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330635 1465727 pod_ready.go:92] pod "kube-proxy-7dtkz" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.330663 1465727 pod_ready.go:81] duration metric: took 399.520658ms waiting for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330673 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731521 1465727 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.731554 1465727 pod_ready.go:81] duration metric: took 400.873461ms waiting for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731568 1465727 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.738444 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:24.686688 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.688623 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:29.186579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.180697 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.855170809s)
	I0131 03:20:28.180729 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0131 03:20:28.180767 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:28.180841 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:29.652395 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.471522862s)
	I0131 03:20:29.652425 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0131 03:20:29.652463 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:29.652540 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:28.578108 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.077401 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.080970 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.739586 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:30.739736 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.238815 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.187176 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.188862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.502715 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.85014178s)
	I0131 03:20:31.502759 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0131 03:20:31.502787 1465496 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:31.502844 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:32.554143 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.051250967s)
	I0131 03:20:32.554188 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0131 03:20:32.554229 1465496 cache_images.go:123] Successfully loaded all cached images
	I0131 03:20:32.554282 1465496 cache_images.go:92] LoadImages completed in 16.590108265s
	I0131 03:20:32.554386 1465496 ssh_runner.go:195] Run: crio config
	I0131 03:20:32.619584 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:32.619612 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:32.619637 1465496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:32.619665 1465496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-625812 NodeName:no-preload-625812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:32.619840 1465496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-625812"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:32.619939 1465496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-625812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
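	The ExecStart override above is installed as a systemd drop-in (the 10-kubeadm.conf scp'd just below). On the node, the effective unit and a manual reload can be checked with:
	  sudo systemctl cat kubelet
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet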
	I0131 03:20:32.620017 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0131 03:20:32.628855 1465496 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:32.628963 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:32.636481 1465496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0131 03:20:32.654320 1465496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0131 03:20:32.670366 1465496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
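	The rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new before replacing the live copy. One way to compare it against upstream defaults (a sketch; flags as in recent kubeadm releases) is:
	  kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration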
	I0131 03:20:32.688615 1465496 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:32.692444 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:32.705599 1465496 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812 for IP: 192.168.72.23
	I0131 03:20:32.705644 1465496 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:32.705822 1465496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:32.705894 1465496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:32.705997 1465496 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/client.key
	I0131 03:20:32.706058 1465496 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key.a30a8404
	I0131 03:20:32.706092 1465496 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key
	I0131 03:20:32.706194 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:32.706221 1465496 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:32.706231 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:32.706258 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:32.706284 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:32.706310 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:32.706349 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:32.707138 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:32.729972 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:20:32.753498 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:32.775599 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:20:32.799455 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:32.822732 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:32.845839 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:32.868933 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:32.891565 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:32.914752 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:32.937305 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:32.960253 1465496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:32.976285 1465496 ssh_runner.go:195] Run: openssl version
	I0131 03:20:32.981630 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:32.990533 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994914 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994986 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:33.000249 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:33.009516 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:33.018643 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023046 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023106 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.028238 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:33.036925 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:33.045708 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050442 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050536 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.056067 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
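	The symlink names used here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how OpenSSL locates CAs under /etc/ssl/certs. The hash for a given certificate can be reproduced with:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem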
	I0131 03:20:33.065200 1465496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:33.069489 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:33.075140 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:33.080981 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:33.087018 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:33.092665 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:33.099605 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
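	Each -checkend 86400 call exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which lets the restart path spot certificates that need regenerating. Run by hand it looks like:
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h (or already expired)"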
	I0131 03:20:33.106207 1465496 kubeadm.go:404] StartCluster: {Name:no-preload-625812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:33.106310 1465496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:33.106376 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:33.150992 1465496 cri.go:89] found id: ""
	I0131 03:20:33.151088 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:33.161105 1465496 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:33.161131 1465496 kubeadm.go:636] restartCluster start
	I0131 03:20:33.161219 1465496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:33.170638 1465496 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.172109 1465496 kubeconfig.go:92] found "no-preload-625812" server: "https://192.168.72.23:8443"
	I0131 03:20:33.175582 1465496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:33.185433 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.185523 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.196952 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
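	The polling command here uses pgrep with -f (match against the full command line), -x (require the pattern to match the whole command line), and -n (newest matching process); an exit status of 1, as above, simply means the kube-apiserver static pod has not come up yet. Run by hand on the node:
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'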
	I0131 03:20:33.685512 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.685612 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.696682 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.186433 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.197957 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.685533 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.685640 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.696731 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:35.186267 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.186369 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.197982 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.578014 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:33.578038 1466459 pod_ready.go:81] duration metric: took 7.007241801s waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:33.578047 1466459 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:35.585039 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.585299 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.737680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.740698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686379 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:38.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686193 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.686284 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.697343 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.185858 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.185960 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.197161 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.685546 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.685646 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.696796 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.186186 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.186280 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.197357 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.685916 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.686012 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.700288 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.185723 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.185820 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.197397 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.685651 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.685757 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.697204 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.185744 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.185844 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.198598 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.686185 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.686267 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.697736 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.186432 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.198099 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.085028 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.585359 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.238117 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.239129 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.687687 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:43.186737 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.686132 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.686236 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.699172 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.185642 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.185744 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.198284 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.685827 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.685935 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.698501 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.185953 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.186088 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.196802 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.686371 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.686445 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.698536 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.186445 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:43.186560 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:43.198640 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.198679 1465496 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:43.198690 1465496 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:43.198704 1465496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:43.198765 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:43.235648 1465496 cri.go:89] found id: ""
	I0131 03:20:43.235740 1465496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:43.252848 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:43.263501 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:43.263590 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274044 1465496 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274075 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:43.402961 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.454642 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051640672s)
	I0131 03:20:44.454673 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.660185 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.744795 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.816577 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:44.816690 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:45.316895 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:44.591170 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.085954 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:44.739730 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.240982 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.686082 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.687451 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.816800 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.317657 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.816892 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.317696 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.342389 1465496 api_server.go:72] duration metric: took 2.525810484s to wait for apiserver process to appear ...
	I0131 03:20:47.342423 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:47.342448 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.385155 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.385192 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.385206 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.431253 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.431293 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.842624 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.847644 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:51.847685 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.343335 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.348723 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:52.348780 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.842935 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.848263 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:20:52.863072 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:20:52.863104 1465496 api_server.go:131] duration metric: took 5.520672047s to wait for apiserver health ...
	I0131 03:20:52.863113 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:52.863120 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:52.865141 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:49.585837 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.087030 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:49.738408 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:51.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:50.186754 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.197217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.866822 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:52.881451 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:52.918954 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:52.930533 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:52.930566 1465496 system_pods.go:61] "coredns-76f75df574-4qhpt" [9a5c2a49-f787-456a-9d15-cea2e111c6fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:52.930575 1465496 system_pods.go:61] "etcd-no-preload-625812" [2dbdb2c3-dd04-40de-80b4-caf18f1df2e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:52.930587 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [fd209808-5ebc-464e-b14b-88c6c830d7bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:52.930593 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [1f2cb9ec-cec9-4c45-8b78-0c9a9c0c9821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:52.930600 1465496 system_pods.go:61] "kube-proxy-8fdx9" [d1311d92-482b-4aa2-9dd3-053597717aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:52.930607 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [f7b0ba21-6c1d-4c67-aa69-6086b28ddf78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:52.930614 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-sjndx" [6bcdb3bb-4e28-4127-a273-091b44059d10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:52.930620 1465496 system_pods.go:61] "storage-provisioner" [66a4003b-e35e-4216-8d27-e8897a6ddc71] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:52.930627 1465496 system_pods.go:74] duration metric: took 11.645516ms to wait for pod list to return data ...
	I0131 03:20:52.930635 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:52.943250 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:52.943291 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:52.943306 1465496 node_conditions.go:105] duration metric: took 12.665118ms to run NodePressure ...
	I0131 03:20:52.943328 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:53.231968 1465496 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239131 1465496 kubeadm.go:787] kubelet initialised
	I0131 03:20:53.239162 1465496 kubeadm.go:788] duration metric: took 7.159608ms waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239171 1465496 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:53.248561 1465496 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:55.256463 1465496 pod_ready.go:102] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.585633 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.086475 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.239922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.738132 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.686904 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.687249 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.187579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.261900 1465496 pod_ready.go:92] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:57.261928 1465496 pod_ready.go:81] duration metric: took 4.013340748s waiting for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:57.261940 1465496 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:59.268779 1465496 pod_ready.go:102] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.586066 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:02.085212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:58.739138 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.739184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:03.243732 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:01.686704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.186767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.771061 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:00.771093 1465496 pod_ready.go:81] duration metric: took 3.509144879s waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:00.771107 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279749 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.279778 1465496 pod_ready.go:81] duration metric: took 1.508661327s waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279792 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286520 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.286550 1465496 pod_ready.go:81] duration metric: took 6.748377ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286564 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292455 1465496 pod_ready.go:92] pod "kube-proxy-8fdx9" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.292479 1465496 pod_ready.go:81] duration metric: took 5.904786ms waiting for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292491 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:04.300076 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.086312 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.086965 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:05.737969 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:07.738025 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.686645 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:09.186769 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.300932 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.799183 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:06.799208 1465496 pod_ready.go:81] duration metric: took 4.506710382s waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:06.799220 1465496 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:08.806102 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:08.585128 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.586208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.085360 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.238339 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:12.739920 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.186807 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.686030 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.306903 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.808471 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.085478 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.584968 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.238994 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.738301 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.686243 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.687966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:16.306169 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:18.306368 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.585283 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.085635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.738554 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:21.739391 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.186216 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.186318 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.186605 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.807270 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:23.307367 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.086508 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.585310 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.239650 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.739133 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.687020 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.186319 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:25.807083 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:27.807373 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.809229 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:28.586494 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.085758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.086070 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.237951 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.239234 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.186403 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.186539 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:32.305137 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:34.306664 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.586212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.085235 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.737751 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.239168 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.187669 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:37.686468 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.806650 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:39.305925 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.586428 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.084565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.739723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.237973 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.186321 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:42.187314 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:44.188149 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:41.307318 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.806323 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.085539 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.585341 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.239462 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.738184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:46.686042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.686866 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.806734 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.305446 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.305723 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.085346 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.085442 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:49.738268 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.239669 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.691518 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:53.186195 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.306654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.806020 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.085761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.586368 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.738548 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.739623 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:55.686288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:57.687383 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.807570 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.309552 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.084865 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.085071 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.085111 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.239410 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.239532 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:00.186408 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:02.186782 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.186839 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.806329 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.584749 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:07.586565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.739463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.740128 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.237766 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.187392 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.685886 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.805996 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.807179 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.086003 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.585799 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.238067 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.239177 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.686223 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.686341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:11.305779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:13.307616 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.085808 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.584477 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:14.738859 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.238767 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.187173 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.687034 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.806730 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:18.306392 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.584606 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.585553 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.738470 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.739486 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.185802 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:22.187625 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.806949 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.306121 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:25.306685 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.585692 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.085348 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.237900 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.238299 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.686574 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.687740 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.186290 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:27.805534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.806722 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.585853 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.087573 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.738699 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:30.740922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.241273 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.687338 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.186661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:32.306153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.306543 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.584981 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.585484 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.085009 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.739413 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.240386 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.687329 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:39.185388 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.308028 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.806629 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.085644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.585560 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.242599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.737723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.186288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.186859 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.306389 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.586579 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.085969 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.739244 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.237508 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:45.188774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.687222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:46.306909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:48.807077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.584667 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.584768 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.239422 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.687896 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:52.188700 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.306677 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.806006 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.585081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.585777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.085122 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.237822 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:56.238861 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.686276 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:57.186263 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.806184 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.306128 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.306364 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.588396 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.598213 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.737414 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.737727 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.739935 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:59.685823 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:01.686758 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:04.185852 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.807107 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.305740 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.085415 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.585036 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.239645 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.739347 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:06.686504 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:08.687322 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.305816 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.305938 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.586253 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.085522 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:10.239099 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.738591 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.186874 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.686181 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.306129 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.806507 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.585172 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.586137 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.738697 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.739523 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:15.686511 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:17.687193 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.306767 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.808302 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:19.085852 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.586641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.739573 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.238839 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:20.187546 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:22.687140 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.306401 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.307029 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.085548 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:26.586436 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.737681 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.737740 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.687572 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.186506 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.808456 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:28.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:30.307207 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.085660 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.087058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.739207 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.238687 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.686331 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.688381 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.187104 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.805987 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.806181 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:33.586190 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.085219 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.085516 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.238857 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.239092 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.687993 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.688870 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.808335 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.085919 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.585866 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.738192 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.738455 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.739283 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.185567 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.186680 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.307589 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.309027 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:44.586117 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.085597 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.238409 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.240204 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.685781 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.686167 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.807531 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.807973 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:50.308410 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.086271 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.086456 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.737691 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.739418 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.686475 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.687616 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:52.806510 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.806619 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:53.586673 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.085541 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.085777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.238680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.238735 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.239259 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.685972 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.686560 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.806707 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.806764 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.087035 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.088546 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.239507 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.240463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.686709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.687576 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.806909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:03.306534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.307522 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.585131 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.585178 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.738411 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.738605 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.186000 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.686048 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.806058 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.306442 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:08.585611 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.088448 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:09.238896 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.239934 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.186391 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.187940 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.680057 1465898 pod_ready.go:81] duration metric: took 4m0.000955013s waiting for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:12.680105 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:12.680132 1465898 pod_ready.go:38] duration metric: took 4m8.549185211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:12.680181 1465898 kubeadm.go:640] restartCluster took 4m32.094843295s
	W0131 03:24:12.680310 1465898 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:12.680376 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:12.307149 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:14.307483 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.586901 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.087404 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.738698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.239338 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.239499 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.806617 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:19.305298 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.585870 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.087112 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:20.737368 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:22.738599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.306715 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.807030 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.586072 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:25.586464 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.586525 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:24.731792 1465727 pod_ready.go:81] duration metric: took 4m0.00020412s waiting for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:24.731846 1465727 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:24.731869 1465727 pod_ready.go:38] duration metric: took 4m1.198813077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:24.731907 1465727 kubeadm.go:640] restartCluster took 5m3.213957096s
	W0131 03:24:24.731983 1465727 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:24.732022 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:26.064348 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.383924825s)
	I0131 03:24:26.064423 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:26.076943 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:26.087474 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:26.095980 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:26.096026 1465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:26.286603 1465898 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:25.808330 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.809779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.308001 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.087127 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:32.589212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:31.227776 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.495715112s)
	I0131 03:24:31.227855 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:31.241889 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:31.251082 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:31.259843 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:31.259887 1465727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0131 03:24:31.469869 1465727 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:32.310672 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:34.808959 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:36.696825 1465898 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:36.696904 1465898 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:36.696998 1465898 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:36.697121 1465898 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:36.697231 1465898 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:36.697306 1465898 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:36.699102 1465898 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:36.699244 1465898 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:36.699334 1465898 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:36.699475 1465898 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:36.699584 1465898 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:36.699700 1465898 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:36.699785 1465898 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:36.699873 1465898 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:36.699958 1465898 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:36.700052 1465898 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:36.700172 1465898 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:36.700217 1465898 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:36.700283 1465898 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:36.700345 1465898 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:36.700406 1465898 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:36.700482 1465898 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:36.700549 1465898 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:36.700647 1465898 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:36.700731 1465898 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:36.702370 1465898 out.go:204]   - Booting up control plane ...
	I0131 03:24:36.702525 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:36.702658 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:36.702731 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:36.702855 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:36.702975 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:36.703038 1465898 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:36.703248 1465898 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:36.703360 1465898 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503117 seconds
	I0131 03:24:36.703517 1465898 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:36.703652 1465898 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:36.703734 1465898 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:36.703950 1465898 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-873005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:36.704029 1465898 kubeadm.go:322] [bootstrap-token] Using token: 51ueuu.c5jl6zenf29j1pbj
	I0131 03:24:36.706123 1465898 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:36.706237 1465898 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:36.706316 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:36.706475 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:36.706662 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:36.706829 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:36.706946 1465898 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:36.707093 1465898 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:36.707179 1465898 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:36.707226 1465898 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:36.707236 1465898 kubeadm.go:322] 
	I0131 03:24:36.707310 1465898 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:36.707317 1465898 kubeadm.go:322] 
	I0131 03:24:36.707411 1465898 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:36.707418 1465898 kubeadm.go:322] 
	I0131 03:24:36.707438 1465898 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:36.707518 1465898 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:36.707590 1465898 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:36.707604 1465898 kubeadm.go:322] 
	I0131 03:24:36.707693 1465898 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:36.707706 1465898 kubeadm.go:322] 
	I0131 03:24:36.707775 1465898 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:36.707785 1465898 kubeadm.go:322] 
	I0131 03:24:36.707834 1465898 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:36.707932 1465898 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:36.708029 1465898 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:36.708038 1465898 kubeadm.go:322] 
	I0131 03:24:36.708135 1465898 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:36.708236 1465898 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:36.708245 1465898 kubeadm.go:322] 
	I0131 03:24:36.708341 1465898 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708458 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:36.708490 1465898 kubeadm.go:322] 	--control-plane 
	I0131 03:24:36.708499 1465898 kubeadm.go:322] 
	I0131 03:24:36.708601 1465898 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:36.708611 1465898 kubeadm.go:322] 
	I0131 03:24:36.708703 1465898 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708836 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:36.708855 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:24:36.708865 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:36.710643 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:33.579236 1466459 pod_ready.go:81] duration metric: took 4m0.001168183s waiting for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:33.579284 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:33.579320 1466459 pod_ready.go:38] duration metric: took 4m12.550695133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:33.579357 1466459 kubeadm.go:640] restartCluster took 4m32.725356038s
	W0131 03:24:33.579451 1466459 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:33.579495 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:36.712379 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:36.727135 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:36.752650 1465898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:36.752760 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.752766 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=default-k8s-diff-port-873005 minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.833601 1465898 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:37.204982 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:37.706104 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.205928 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.705169 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:39.205448 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.810623 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:39.308000 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:44.456046 1465727 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0131 03:24:44.456133 1465727 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:44.456239 1465727 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:44.456349 1465727 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:44.456507 1465727 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:44.456673 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:44.456815 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:44.456888 1465727 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0131 03:24:44.456975 1465727 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:44.458558 1465727 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:44.458637 1465727 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:44.458740 1465727 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:44.458837 1465727 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:44.458937 1465727 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:44.459040 1465727 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:44.459117 1465727 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:44.459212 1465727 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:44.459291 1465727 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:44.459385 1465727 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:44.459491 1465727 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:44.459552 1465727 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:44.459628 1465727 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:44.459691 1465727 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:44.459755 1465727 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:44.459827 1465727 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:44.459899 1465727 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:44.460002 1465727 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:44.461481 1465727 out.go:204]   - Booting up control plane ...
	I0131 03:24:44.461592 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:44.461687 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:44.461801 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:44.461930 1465727 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:44.462130 1465727 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:44.462255 1465727 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503405 seconds
	I0131 03:24:44.462398 1465727 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:44.462577 1465727 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:44.462653 1465727 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:44.462817 1465727 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-711547 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0131 03:24:44.462913 1465727 kubeadm.go:322] [bootstrap-token] Using token: etlsjx.t1u4cz6ewuek932w
	I0131 03:24:44.465248 1465727 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:44.465404 1465727 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:44.465615 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:44.465805 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:44.465987 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:44.466088 1465727 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:44.466170 1465727 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:44.466239 1465727 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:44.466247 1465727 kubeadm.go:322] 
	I0131 03:24:44.466332 1465727 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:44.466354 1465727 kubeadm.go:322] 
	I0131 03:24:44.466456 1465727 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:44.466473 1465727 kubeadm.go:322] 
	I0131 03:24:44.466524 1465727 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:44.466596 1465727 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:44.466677 1465727 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:44.466696 1465727 kubeadm.go:322] 
	I0131 03:24:44.466764 1465727 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:44.466870 1465727 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:44.466971 1465727 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:44.466988 1465727 kubeadm.go:322] 
	I0131 03:24:44.467085 1465727 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0131 03:24:44.467196 1465727 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:44.467208 1465727 kubeadm.go:322] 
	I0131 03:24:44.467300 1465727 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467443 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:44.467479 1465727 kubeadm.go:322]     --control-plane 	  
	I0131 03:24:44.467488 1465727 kubeadm.go:322] 
	I0131 03:24:44.467588 1465727 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:44.467599 1465727 kubeadm.go:322] 
	I0131 03:24:44.467695 1465727 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467834 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:44.467849 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:24:44.467858 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:44.470130 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:39.705234 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.205164 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.705674 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.205045 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.705592 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.205813 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.705913 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.205465 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.705236 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.205365 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.807553 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:43.809153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:47.613982 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.034446752s)
	I0131 03:24:47.614087 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:47.627141 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:47.635785 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:47.643856 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:47.643912 1466459 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:47.866988 1466459 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:44.472066 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:44.484082 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:44.503062 1465727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:44.503138 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.503164 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=old-k8s-version-711547 minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.557194 1465727 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:44.796311 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.296601 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.796904 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.296474 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.796658 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.296647 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.796712 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.296469 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.705251 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.205696 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.705947 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.205519 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.705735 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.205285 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.706009 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.205416 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.705969 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.205783 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.306658 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:48.307077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:50.311654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:49.705636 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.205958 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.456803 1465898 kubeadm.go:1088] duration metric: took 13.704121927s to wait for elevateKubeSystemPrivileges.
	I0131 03:24:50.456854 1465898 kubeadm.go:406] StartCluster complete in 5m9.932475085s
	I0131 03:24:50.456883 1465898 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.457001 1465898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:24:50.460015 1465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.460408 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:24:50.460617 1465898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:24:50.460718 1465898 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460745 1465898 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.460753 1465898 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:24:50.460798 1465898 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460831 1465898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-873005"
	I0131 03:24:50.460855 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461315 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461342 1465898 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.461361 1465898 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:50.461364 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0131 03:24:50.461369 1465898 addons.go:243] addon metrics-server should already be in state true
	I0131 03:24:50.461410 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461322 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461644 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.461778 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461812 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.460670 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:24:50.486168 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0131 03:24:50.486189 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0131 03:24:50.486323 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0131 03:24:50.486737 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487153 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487761 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.487781 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488055 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.488074 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488193 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.488460 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.488587 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.488984 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.489649 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.489717 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.490413 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.490433 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.492357 1465898 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.492372 1465898 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:24:50.492402 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.492774 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.492815 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.493142 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.493853 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.493904 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.510041 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0131 03:24:50.510628 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.511294 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.511316 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.511749 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.511982 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.512352 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0131 03:24:50.512842 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.513435 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.513454 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.513922 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.513984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.514319 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0131 03:24:50.516752 1465898 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:24:50.514718 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.514788 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.518232 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:24:50.518238 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.518248 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:24:50.518271 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.521721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.522659 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522988 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.523038 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.523050 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.523231 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.523401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.523571 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.526843 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.530691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.532381 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.534246 1465898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:24:50.535799 1465898 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.535826 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:24:50.535848 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.538666 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.538998 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.539031 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.539275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.540037 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0131 03:24:50.540217 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.540435 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.540502 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.540575 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.541462 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.541480 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.541918 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.542136 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.543588 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.546790 1465898 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.546807 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:24:50.546828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.549791 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550227 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.550254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550545 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.550712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.550827 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.550914 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.720404 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:24:50.750602 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:24:50.750631 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:24:50.770493 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.781740 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.831005 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:24:50.831037 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:24:50.957145 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:50.957195 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:24:50.995868 1465898 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-873005" context rescaled to 1 replicas
	I0131 03:24:50.995924 1465898 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:24:50.997774 1465898 out.go:177] * Verifying Kubernetes components...
	I0131 03:24:50.999400 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:51.127181 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:52.814257 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.093763301s)
	I0131 03:24:52.814295 1465898 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0131 03:24:53.442603 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.660817091s)
	I0131 03:24:53.442735 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.315510869s)
	I0131 03:24:53.442653 1465898 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.443214595s)
	I0131 03:24:53.442784 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442807 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442746 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442847 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442800 1465898 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.442686 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.672154364s)
	I0131 03:24:53.442931 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442944 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443178 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443204 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443234 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443271 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443290 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443307 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443324 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443326 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443342 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443355 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443370 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443443 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443463 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443474 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443484 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443558 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443571 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443834 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443843 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443852 1465898 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:53.443857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.444009 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.444018 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.477413 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.477442 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.477848 1465898 node_ready.go:49] node "default-k8s-diff-port-873005" has status "Ready":"True"
	I0131 03:24:53.477878 1465898 node_ready.go:38] duration metric: took 34.988647ms waiting for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.477903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.477913 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.477891 1465898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:53.477926 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:48.797209 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.296541 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.796400 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.297357 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.797175 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.297121 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.796457 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.297151 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.797043 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.296354 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.480701 1465898 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0131 03:24:53.482138 1465898 addons.go:505] enable addons completed in 3.021541847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0131 03:24:53.518183 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:52.806757 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:54.808761 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:53.796405 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.296358 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.796988 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.296633 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.797131 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.296750 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.797103 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.296955 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.796330 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.296387 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.837963 1466459 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:58.838075 1466459 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:58.838193 1466459 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:58.838328 1466459 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:58.838507 1466459 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:58.838599 1466459 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:58.840259 1466459 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:58.840364 1466459 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:58.840490 1466459 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:58.840620 1466459 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:58.840718 1466459 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:58.840826 1466459 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:58.840905 1466459 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:58.841008 1466459 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:58.841106 1466459 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:58.841214 1466459 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:58.841304 1466459 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:58.841349 1466459 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:58.841420 1466459 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:58.841492 1466459 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:58.841553 1466459 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:58.841621 1466459 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:58.841694 1466459 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:58.841805 1466459 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:58.841887 1466459 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:58.843555 1466459 out.go:204]   - Booting up control plane ...
	I0131 03:24:58.843684 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:58.843804 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:58.843917 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:58.844072 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:58.844208 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:58.844297 1466459 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:58.844540 1466459 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:58.844657 1466459 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003861 seconds
	I0131 03:24:58.844797 1466459 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:58.844947 1466459 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:58.845022 1466459 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:58.845232 1466459 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-958254 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:58.845309 1466459 kubeadm.go:322] [bootstrap-token] Using token: ash1vg.z2czyygl2nysl4yb
	I0131 03:24:58.846832 1466459 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:58.846943 1466459 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:58.847042 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:58.847238 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:58.847445 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:58.847620 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:58.847735 1466459 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:58.847908 1466459 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:58.847969 1466459 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:58.848034 1466459 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:58.848045 1466459 kubeadm.go:322] 
	I0131 03:24:58.848142 1466459 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:58.848152 1466459 kubeadm.go:322] 
	I0131 03:24:58.848279 1466459 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:58.848308 1466459 kubeadm.go:322] 
	I0131 03:24:58.848355 1466459 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:58.848440 1466459 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:58.848515 1466459 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:58.848531 1466459 kubeadm.go:322] 
	I0131 03:24:58.848611 1466459 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:58.848622 1466459 kubeadm.go:322] 
	I0131 03:24:58.848684 1466459 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:58.848692 1466459 kubeadm.go:322] 
	I0131 03:24:58.848769 1466459 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:58.848884 1466459 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:58.848987 1466459 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:58.848994 1466459 kubeadm.go:322] 
	I0131 03:24:58.849127 1466459 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:58.849252 1466459 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:58.849265 1466459 kubeadm.go:322] 
	I0131 03:24:58.849390 1466459 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849540 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:58.849572 1466459 kubeadm.go:322] 	--control-plane 
	I0131 03:24:58.849587 1466459 kubeadm.go:322] 
	I0131 03:24:58.849698 1466459 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:58.849710 1466459 kubeadm.go:322] 
	I0131 03:24:58.849817 1466459 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849963 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:58.849981 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:24:58.849991 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:58.851748 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:54.532127 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.532155 1465898 pod_ready.go:81] duration metric: took 1.013942045s waiting for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.532164 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537895 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.537924 1465898 pod_ready.go:81] duration metric: took 5.752669ms waiting for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537937 1465898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543819 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.543850 1465898 pod_ready.go:81] duration metric: took 5.903392ms waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543863 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549279 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.549303 1465898 pod_ready.go:81] duration metric: took 5.431331ms waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549315 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647791 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.647830 1465898 pod_ready.go:81] duration metric: took 98.504261ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647846 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446878 1465898 pod_ready.go:92] pod "kube-proxy-blwwq" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.446913 1465898 pod_ready.go:81] duration metric: took 799.058225ms waiting for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446927 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848226 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.848261 1465898 pod_ready.go:81] duration metric: took 401.323547ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848275 1465898 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:57.855091 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:57.306243 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:59.307152 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:58.796423 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.297312 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.796598 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.296932 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.797306 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.963954 1465727 kubeadm.go:1088] duration metric: took 16.460870964s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:00.964007 1465727 kubeadm.go:406] StartCluster complete in 5m39.492487154s
	I0131 03:25:00.964037 1465727 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.964135 1465727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:00.965942 1465727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.966222 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:00.966379 1465727 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:00.966464 1465727 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966478 1465727 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966474 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:25:00.966502 1465727 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966514 1465727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-711547"
	I0131 03:25:00.966522 1465727 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-711547"
	W0131 03:25:00.966531 1465727 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:00.966493 1465727 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-711547"
	W0131 03:25:00.966557 1465727 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:00.966579 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966610 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966981 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.966993 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967028 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967040 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967142 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967186 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.986034 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0131 03:25:00.986291 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0131 03:25:00.986619 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.986746 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.987299 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987320 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987467 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987479 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987834 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.988010 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:00.988075 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0131 03:25:00.988399 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.989011 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.989031 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.989620 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.990204 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.990247 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.990830 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.991921 1465727 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-711547"
	W0131 03:25:00.991946 1465727 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:00.991979 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.992390 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.992429 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.996772 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.996817 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.009234 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0131 03:25:01.009861 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.010560 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.010580 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.011185 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.011401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.013070 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0131 03:25:01.013907 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.014029 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.016324 1465727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:01.014597 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.017922 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.018046 1465727 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.018070 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:01.018094 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.018526 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.019101 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:01.019150 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.019442 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0131 03:25:01.019888 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.020393 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.020424 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.020822 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.020992 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.021500 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.022242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.022654 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.022821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.022997 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.023406 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.025473 1465727 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:01.026870 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:01.026888 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:01.026904 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.029751 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030085 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.030100 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030398 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.030647 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.030818 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.030977 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.037553 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0131 03:25:01.038049 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.038517 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.038542 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.038963 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.039329 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.041534 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.042115 1465727 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.042137 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:01.042170 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.045444 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.045973 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.045992 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.046187 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.046374 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.046619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.046751 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.284926 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:01.284951 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:01.298019 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:01.338666 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.364117 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.383424 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:01.383460 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:01.499627 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.499676 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:01.557563 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.633792 1465727 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-711547" context rescaled to 1 replicas
	I0131 03:25:01.633844 1465727 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:01.636944 1465727 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:01.638596 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:02.375769 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.07770508s)
	I0131 03:25:02.375806 1465727 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:02.849278 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.485115978s)
	I0131 03:25:02.849343 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849348 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.510642603s)
	I0131 03:25:02.849361 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849397 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849411 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849431 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291827391s)
	I0131 03:25:02.849463 1465727 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.210839065s)
	I0131 03:25:02.849466 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849478 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849490 1465727 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.851686 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851687 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851705 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851714 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851701 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851724 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851732 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851715 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851726 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851744 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851749 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851754 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851736 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851812 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851828 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.852136 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852158 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852178 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852187 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852194 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852203 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852214 1465727 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-711547"
	I0131 03:25:02.852220 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852249 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852257 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.878278 1465727 node_ready.go:49] node "old-k8s-version-711547" has status "Ready":"True"
	I0131 03:25:02.878313 1465727 node_ready.go:38] duration metric: took 28.809729ms waiting for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.878339 1465727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:02.906619 1465727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:02.910781 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.910809 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.911127 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.911137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.911148 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.913178 1465727 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0131 03:24:58.853196 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:58.880016 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:58.909967 1466459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:58.910062 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.910111 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=embed-certs-958254 minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.271954 1466459 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:59.310346 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.810934 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.310635 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.810402 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.310569 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.810714 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.310744 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.811360 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:03.311376 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.915069 1465727 addons.go:505] enable addons completed in 1.948706414s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0131 03:24:59.856962 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:02.358614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:01.807470 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:04.306044 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:03.811326 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.310435 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.811033 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.310537 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.810596 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.311182 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.811200 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.310633 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.810619 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:08.310985 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.914636 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:07.415226 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.414866 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.414894 1465727 pod_ready.go:81] duration metric: took 5.508246838s waiting for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.414904 1465727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421152 1465727 pod_ready.go:92] pod "kube-proxy-wzft2" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.421177 1465727 pod_ready.go:81] duration metric: took 6.2664ms waiting for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421191 1465727 pod_ready.go:38] duration metric: took 5.542837407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:08.421243 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:08.421313 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:08.439228 1465727 api_server.go:72] duration metric: took 6.805346982s to wait for apiserver process to appear ...
	I0131 03:25:08.439258 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:08.439321 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:25:08.445886 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:25:08.446826 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:25:08.446848 1465727 api_server.go:131] duration metric: took 7.582095ms to wait for apiserver health ...
	I0131 03:25:08.446856 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:08.450063 1465727 system_pods.go:59] 4 kube-system pods found
	I0131 03:25:08.450085 1465727 system_pods.go:61] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.450089 1465727 system_pods.go:61] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.450095 1465727 system_pods.go:61] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.450100 1465727 system_pods.go:61] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.450112 1465727 system_pods.go:74] duration metric: took 3.250434ms to wait for pod list to return data ...
	I0131 03:25:08.450121 1465727 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:08.452528 1465727 default_sa.go:45] found service account: "default"
	I0131 03:25:08.452546 1465727 default_sa.go:55] duration metric: took 2.420247ms for default service account to be created ...
	I0131 03:25:08.452553 1465727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:08.457485 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.457514 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.457522 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.457533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.457540 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.457561 1465727 retry.go:31] will retry after 235.942588ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:04.856217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.856378 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.857457 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.800354 1465496 pod_ready.go:81] duration metric: took 4m0.001111271s waiting for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:06.800395 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:25:06.800424 1465496 pod_ready.go:38] duration metric: took 4m13.561240535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:06.800474 1465496 kubeadm.go:640] restartCluster took 4m33.63933558s
	W0131 03:25:06.800585 1465496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:25:06.800626 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:25:08.811193 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.310464 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.810641 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.310665 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.810667 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.995304 1466459 kubeadm.go:1088] duration metric: took 12.08531849s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:10.995343 1466459 kubeadm.go:406] StartCluster complete in 5m10.197561628s
	I0131 03:25:10.995368 1466459 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.995476 1466459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:10.997565 1466459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.998562 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:10.998861 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:25:10.999077 1466459 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:10.999167 1466459 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-958254"
	I0131 03:25:10.999184 1466459 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-958254"
	W0131 03:25:10.999192 1466459 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:10.999198 1466459 addons.go:69] Setting default-storageclass=true in profile "embed-certs-958254"
	I0131 03:25:10.999232 1466459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-958254"
	I0131 03:25:10.999234 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:10.999598 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999631 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999673 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999709 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999738 1466459 addons.go:69] Setting metrics-server=true in profile "embed-certs-958254"
	I0131 03:25:10.999759 1466459 addons.go:234] Setting addon metrics-server=true in "embed-certs-958254"
	W0131 03:25:10.999767 1466459 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:10.999811 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.000160 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.000206 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.020646 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0131 03:25:11.020716 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0131 03:25:11.021273 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021412 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021944 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.021972 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022107 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.022139 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022542 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022540 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022777 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.023181 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.023224 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.027202 1466459 addons.go:234] Setting addon default-storageclass=true in "embed-certs-958254"
	W0131 03:25:11.027230 1466459 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:11.027263 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.027702 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.027754 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.028003 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0131 03:25:11.029048 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.029571 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.029590 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.030209 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.030885 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.030931 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.042923 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0131 03:25:11.043492 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.044071 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.044086 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.044497 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.044800 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.046645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.049444 1466459 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:11.051401 1466459 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.051441 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:11.051477 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.054476 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055341 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.055429 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0131 03:25:11.055608 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.055626 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055808 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.056025 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.056244 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.056409 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.056920 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.056932 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.056989 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0131 03:25:11.057274 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.057428 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.057495 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.057847 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.057860 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.058662 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.059343 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.059372 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.059555 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.061701 1466459 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:11.063119 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:11.063138 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:11.063159 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.066101 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066408 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.066423 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066762 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.066931 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.067054 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.067162 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.080881 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0131 03:25:11.081403 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.081919 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.081931 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.082442 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.082905 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.085059 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.085518 1466459 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.085529 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:11.085545 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.087954 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.088806 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.088858 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.088868 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.089011 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.089197 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.089609 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.229346 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.255093 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:11.255124 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:11.278162 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.314832 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:11.314860 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:11.374433 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.374463 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:11.386186 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:11.431597 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.617487 1466459 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-958254" context rescaled to 1 replicas
	I0131 03:25:11.617543 1466459 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:11.620222 1466459 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:11.621888 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:08.700194 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.700226 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.700232 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.700238 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.700243 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.700267 1465727 retry.go:31] will retry after 264.487072ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:08.970950 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.970994 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.971002 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.971013 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.971020 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.971113 1465727 retry.go:31] will retry after 296.249207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.273631 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.273666 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.273675 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.273683 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.273696 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.273722 1465727 retry.go:31] will retry after 556.880076ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.835957 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.835985 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.835991 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.835997 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.836002 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.836020 1465727 retry.go:31] will retry after 541.012405ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:10.382622 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:10.382657 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:10.382665 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:10.382674 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:10.382681 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:10.382705 1465727 retry.go:31] will retry after 644.079363ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.036738 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.036777 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.036785 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.036796 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.036803 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.036825 1465727 retry.go:31] will retry after 832.963851ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.877526 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.877569 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.877578 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.877589 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.877597 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.877635 1465727 retry.go:31] will retry after 1.088792554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:12.972355 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:12.972391 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:12.972397 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:12.972403 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:12.972408 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:12.972428 1465727 retry.go:31] will retry after 1.37018086s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:13.615542 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337333269s)
	I0131 03:25:13.615599 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.229373467s)
	I0131 03:25:13.615607 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615633 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.615632 1466459 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:13.615738 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.386359945s)
	I0131 03:25:13.615790 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615807 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616101 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616109 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616118 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616129 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616138 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616174 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616184 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616194 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616204 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616351 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616374 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.617924 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.618094 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.618057 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.783459 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.783487 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.783847 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.783872 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.966310 1466459 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.344369704s)
	I0131 03:25:13.966372 1466459 node_ready.go:35] waiting up to 6m0s for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.966498 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.534826964s)
	I0131 03:25:13.966582 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.966602 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.966990 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967011 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967023 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.967033 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.967278 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967298 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967310 1466459 addons.go:470] Verifying addon metrics-server=true in "embed-certs-958254"
	I0131 03:25:13.970159 1466459 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:10.858108 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.357207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.971527 1466459 addons.go:505] enable addons completed in 2.972461213s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:13.987533 1466459 node_ready.go:49] node "embed-certs-958254" has status "Ready":"True"
	I0131 03:25:13.987564 1466459 node_ready.go:38] duration metric: took 21.175558ms waiting for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.987577 1466459 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:13.998968 1466459 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505741 1466459 pod_ready.go:92] pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.505764 1466459 pod_ready.go:81] duration metric: took 1.506759288s waiting for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505775 1466459 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511011 1466459 pod_ready.go:92] pod "etcd-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.511037 1466459 pod_ready.go:81] duration metric: took 5.255671ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511050 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515672 1466459 pod_ready.go:92] pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.515691 1466459 pod_ready.go:81] duration metric: took 4.632936ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515699 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520372 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.520388 1466459 pod_ready.go:81] duration metric: took 4.683171ms waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520397 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570633 1466459 pod_ready.go:92] pod "kube-proxy-2n2v5" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.570660 1466459 pod_ready.go:81] duration metric: took 50.257557ms waiting for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570671 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970302 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.970325 1466459 pod_ready.go:81] duration metric: took 399.647846ms waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970336 1466459 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:17.977775 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:14.349642 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:14.349679 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:14.349688 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:14.349698 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:14.349705 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:14.349726 1465727 retry.go:31] will retry after 1.923619057s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:16.279057 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:16.279090 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:16.279098 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:16.279108 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:16.279114 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:16.279137 1465727 retry.go:31] will retry after 2.073030623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:18.359162 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:18.359189 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:18.359195 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:18.359204 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:18.359209 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:18.359228 1465727 retry.go:31] will retry after 3.260033275s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:15.855521 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:17.855614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:20.514278 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.713623849s)
	I0131 03:25:20.514394 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:20.527663 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:25:20.536562 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:25:20.545294 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:25:20.545336 1465496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:25:20.598639 1465496 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0131 03:25:20.598867 1465496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:25:20.744229 1465496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:25:20.744371 1465496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:25:20.744509 1465496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:25:20.966346 1465496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:25:20.968311 1465496 out.go:204]   - Generating certificates and keys ...
	I0131 03:25:20.968451 1465496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:25:20.968540 1465496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:25:20.968652 1465496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:25:20.968758 1465496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:25:20.968846 1465496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:25:20.969285 1465496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:25:20.969711 1465496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:25:20.970103 1465496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:25:20.970500 1465496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:25:20.970914 1465496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:25:20.971238 1465496 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:25:20.971319 1465496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:25:21.137192 1465496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:25:21.403913 1465496 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0131 03:25:21.508809 1465496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:25:21.721878 1465496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:25:22.136726 1465496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:25:22.137207 1465496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:25:22.139977 1465496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:25:19.979362 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.477779 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.624554 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:21.624586 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:21.624592 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:21.624602 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:21.624607 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:21.624626 1465727 retry.go:31] will retry after 3.519201574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:19.856226 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.856396 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:23.857487 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.141783 1465496 out.go:204]   - Booting up control plane ...
	I0131 03:25:22.141884 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:25:22.141972 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:25:22.143031 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:25:22.163448 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:25:22.163586 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:25:22.163682 1465496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:25:22.287643 1465496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:25:24.479871 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:26.977625 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:25.149248 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:25.149277 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:25.149282 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:25.149290 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:25.149295 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:25.149314 1465727 retry.go:31] will retry after 5.238557946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:25.857650 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:28.356862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.793355 1465496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506089 seconds
	I0131 03:25:30.811559 1465496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:25:30.830148 1465496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:25:31.367774 1465496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:25:31.368036 1465496 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-625812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:25:31.887121 1465496 kubeadm.go:322] [bootstrap-token] Using token: t3t0h9.3huj9bl3w24ti869
	I0131 03:25:31.888852 1465496 out.go:204]   - Configuring RBAC rules ...
	I0131 03:25:31.888974 1465496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:25:31.893841 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:25:31.902695 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:25:31.908132 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:25:31.912738 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:25:31.918089 1465496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:25:31.936690 1465496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:25:32.182433 1465496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:25:32.325953 1465496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:25:32.325981 1465496 kubeadm.go:322] 
	I0131 03:25:32.326114 1465496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:25:32.326143 1465496 kubeadm.go:322] 
	I0131 03:25:32.326244 1465496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:25:32.326272 1465496 kubeadm.go:322] 
	I0131 03:25:32.326332 1465496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:25:32.326416 1465496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:25:32.326500 1465496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:25:32.326511 1465496 kubeadm.go:322] 
	I0131 03:25:32.326588 1465496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:25:32.326598 1465496 kubeadm.go:322] 
	I0131 03:25:32.326664 1465496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:25:32.326674 1465496 kubeadm.go:322] 
	I0131 03:25:32.326743 1465496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:25:32.326853 1465496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:25:32.326947 1465496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:25:32.326958 1465496 kubeadm.go:322] 
	I0131 03:25:32.327052 1465496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:25:32.327151 1465496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:25:32.327160 1465496 kubeadm.go:322] 
	I0131 03:25:32.327264 1465496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327405 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:25:32.327437 1465496 kubeadm.go:322] 	--control-plane 
	I0131 03:25:32.327447 1465496 kubeadm.go:322] 
	I0131 03:25:32.327553 1465496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:25:32.327564 1465496 kubeadm.go:322] 
	I0131 03:25:32.327667 1465496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327800 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:25:32.328638 1465496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:25:32.328815 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:25:32.328835 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:25:32.330439 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:25:28.984930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:31.480349 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.393923 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:30.393959 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:30.393968 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:30.393979 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:30.393985 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:30.394010 1465727 retry.go:31] will retry after 6.045479872s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:30.357227 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.358411 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.332529 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:25:32.442284 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:25:32.487754 1465496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:25:32.487829 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.487926 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=no-preload-625812 minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.706857 1465496 ops.go:34] apiserver oom_adj: -16
	I0131 03:25:32.707010 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.207717 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.707229 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.207690 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.707786 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:35.207781 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.980255 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.481025 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.444898 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:36.444932 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:36.444938 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:36.444946 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:36.444951 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:36.444993 1465727 retry.go:31] will retry after 6.676077992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:34.855915 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:37.356945 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:35.707273 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.207173 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.707797 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.207697 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.707209 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.207989 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.707538 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.207693 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.707737 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:40.207439 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.980635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:41.479377 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:43.125885 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:43.125912 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:43.125917 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:43.125924 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:43.125928 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:43.125947 1465727 retry.go:31] will retry after 7.454064585s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:39.858377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:42.356966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:40.707639 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.207708 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.707131 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.207700 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.707292 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.207810 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.707392 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.207490 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.707258 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.883783 1465496 kubeadm.go:1088] duration metric: took 12.396028951s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:44.883823 1465496 kubeadm.go:406] StartCluster complete in 5m11.777629477s
	I0131 03:25:44.883850 1465496 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.883949 1465496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:44.886319 1465496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.886620 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:44.886727 1465496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:44.886814 1465496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-625812"
	I0131 03:25:44.886837 1465496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-625812"
	W0131 03:25:44.886849 1465496 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:44.886903 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.886934 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:25:44.886991 1465496 addons.go:69] Setting default-storageclass=true in profile "no-preload-625812"
	I0131 03:25:44.887007 1465496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-625812"
	I0131 03:25:44.887134 1465496 addons.go:69] Setting metrics-server=true in profile "no-preload-625812"
	I0131 03:25:44.887155 1465496 addons.go:234] Setting addon metrics-server=true in "no-preload-625812"
	W0131 03:25:44.887164 1465496 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:44.887216 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.887313 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887349 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887407 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887439 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887611 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887655 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.908876 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0131 03:25:44.908881 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0131 03:25:44.908879 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0131 03:25:44.909406 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909433 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909512 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909925 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.909950 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910054 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910098 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910123 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910148 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910434 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910530 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910543 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910740 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.911086 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911140 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.911185 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911230 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.914635 1465496 addons.go:234] Setting addon default-storageclass=true in "no-preload-625812"
	W0131 03:25:44.914667 1465496 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:44.914698 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.915089 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.915135 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.931265 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0131 03:25:44.931296 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0131 03:25:44.931816 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.931859 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.932148 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932599 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932677 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932938 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933062 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.933655 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.933681 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.933726 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933947 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934129 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.934262 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934954 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.935001 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.936333 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.938601 1465496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:44.940239 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:44.940256 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:44.940273 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.938638 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.942306 1465496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:44.944873 1465496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:44.944894 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:44.944914 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.943649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944987 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.945023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944263 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.945795 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.946072 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.946309 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.949097 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949522 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.949544 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949710 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.949892 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.950040 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.950179 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.959691 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0131 03:25:44.960146 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.960696 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.960723 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.961045 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.961279 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.963057 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.963321 1465496 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:44.963342 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:44.963363 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.966336 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.966808 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.966845 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.967006 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.967205 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.967329 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.967472 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:45.114858 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:45.135760 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:45.209439 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:45.209466 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:45.219146 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:45.287400 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:45.287430 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:45.380888 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:45.380917 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:45.462341 1465496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-625812" context rescaled to 1 replicas
	I0131 03:25:45.462403 1465496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:45.463834 1465496 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:45.465542 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:45.515980 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:46.322228 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.20732453s)
	I0131 03:25:46.322281 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.186472094s)
	I0131 03:25:46.322327 1465496 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:46.322296 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322366 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322413 1465496 node_ready.go:35] waiting up to 6m0s for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.322369 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.103177926s)
	I0131 03:25:46.322663 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322676 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322757 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.322760 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.322773 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.322783 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322791 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323137 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323156 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323167 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.323176 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323177 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323257 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323281 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323295 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323733 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323755 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.329699 1465496 node_ready.go:49] node "no-preload-625812" has status "Ready":"True"
	I0131 03:25:46.329719 1465496 node_ready.go:38] duration metric: took 7.243031ms waiting for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.329728 1465496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:46.345672 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.345703 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.345984 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.346000 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.348953 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:46.699387 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183353653s)
	I0131 03:25:46.699456 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699474 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.699910 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.699932 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.699945 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699957 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.700251 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.700272 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.700285 1465496 addons.go:470] Verifying addon metrics-server=true in "no-preload-625812"
	I0131 03:25:46.702053 1465496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:43.980700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.478141 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:44.855513 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.857198 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:49.356657 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.703328 1465496 addons.go:505] enable addons completed in 1.816619953s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:46.865293 1465496 pod_ready.go:97] error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865325 1465496 pod_ready.go:81] duration metric: took 516.342792ms waiting for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:46.865336 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865343 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872316 1465496 pod_ready.go:92] pod "coredns-76f75df574-hvxjf" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.872345 1465496 pod_ready.go:81] duration metric: took 1.006996095s waiting for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872355 1465496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878192 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.878215 1465496 pod_ready.go:81] duration metric: took 5.854656ms waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878223 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883120 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.883139 1465496 pod_ready.go:81] duration metric: took 4.910099ms waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883147 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889909 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.889934 1465496 pod_ready.go:81] duration metric: took 6.780796ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889944 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926206 1465496 pod_ready.go:92] pod "kube-proxy-pkvj6" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:48.926230 1465496 pod_ready.go:81] duration metric: took 1.036280111s waiting for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926239 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325588 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:49.325613 1465496 pod_ready.go:81] duration metric: took 399.368272ms waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325623 1465496 pod_ready.go:38] duration metric: took 2.995885901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:49.325640 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:49.325693 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:49.339591 1465496 api_server.go:72] duration metric: took 3.877145066s to wait for apiserver process to appear ...
	I0131 03:25:49.339624 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:49.339652 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:25:49.345130 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:25:49.346350 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:25:49.346371 1465496 api_server.go:131] duration metric: took 6.739501ms to wait for apiserver health ...
	I0131 03:25:49.346379 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:49.529845 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:25:49.529876 1465496 system_pods.go:61] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.529881 1465496 system_pods.go:61] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.529885 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.529890 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.529894 1465496 system_pods.go:61] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.529898 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.529905 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.529909 1465496 system_pods.go:61] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.529918 1465496 system_pods.go:74] duration metric: took 183.532223ms to wait for pod list to return data ...
	I0131 03:25:49.529926 1465496 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:49.726239 1465496 default_sa.go:45] found service account: "default"
	I0131 03:25:49.726266 1465496 default_sa.go:55] duration metric: took 196.333831ms for default service account to be created ...
	I0131 03:25:49.726276 1465496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:49.933151 1465496 system_pods.go:86] 8 kube-system pods found
	I0131 03:25:49.933188 1465496 system_pods.go:89] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.933198 1465496 system_pods.go:89] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.933205 1465496 system_pods.go:89] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.933212 1465496 system_pods.go:89] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.933220 1465496 system_pods.go:89] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.933228 1465496 system_pods.go:89] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.933243 1465496 system_pods.go:89] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.933254 1465496 system_pods.go:89] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.933268 1465496 system_pods.go:126] duration metric: took 206.984671ms to wait for k8s-apps to be running ...
	I0131 03:25:49.933282 1465496 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:25:49.933345 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:49.949256 1465496 system_svc.go:56] duration metric: took 15.963316ms WaitForService to wait for kubelet.
	I0131 03:25:49.949290 1465496 kubeadm.go:581] duration metric: took 4.486852525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:25:49.949316 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:25:50.126992 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:25:50.127032 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:25:50.127044 1465496 node_conditions.go:105] duration metric: took 177.723252ms to run NodePressure ...
	I0131 03:25:50.127056 1465496 start.go:228] waiting for startup goroutines ...
	I0131 03:25:50.127063 1465496 start.go:233] waiting for cluster config update ...
	I0131 03:25:50.127072 1465496 start.go:242] writing updated cluster config ...
	I0131 03:25:50.127343 1465496 ssh_runner.go:195] Run: rm -f paused
	I0131 03:25:50.184224 1465496 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0131 03:25:50.186267 1465496 out.go:177] * Done! kubectl is now configured to use "no-preload-625812" cluster and "default" namespace by default
	I0131 03:25:48.481166 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.977129 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:52.977622 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.586089 1465727 system_pods.go:86] 6 kube-system pods found
	I0131 03:25:50.586129 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:50.586138 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Pending
	I0131 03:25:50.586144 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:50.586151 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Pending
	I0131 03:25:50.586172 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:50.586182 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:50.586211 1465727 retry.go:31] will retry after 13.55623924s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:51.856116 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:53.856661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:55.480823 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:57.978681 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:56.355895 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:58.356767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:59.981147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.479364 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:00.856081 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.977218 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:06.978863 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.148474 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:04.148505 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:04.148511 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Pending
	I0131 03:26:04.148516 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:04.148520 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:04.148524 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:04.148528 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:04.148533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:04.148537 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:04.148555 1465727 retry.go:31] will retry after 14.271857783s: missing components: etcd
	I0131 03:26:05.355042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:07.358366 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:08.981159 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:10.982761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:09.856454 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:12.357096 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:13.478470 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:15.977827 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.426593 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:18.426625 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:18.426634 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Running
	I0131 03:26:18.426641 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:18.426647 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:18.426652 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:18.426657 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:18.426667 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:18.426676 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:18.426690 1465727 system_pods.go:126] duration metric: took 1m9.974130417s to wait for k8s-apps to be running ...
	I0131 03:26:18.426704 1465727 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:26:18.426762 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:26:18.443853 1465727 system_svc.go:56] duration metric: took 17.14056ms WaitForService to wait for kubelet.
	I0131 03:26:18.443902 1465727 kubeadm.go:581] duration metric: took 1m16.810021481s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:26:18.443930 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:26:18.447269 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:26:18.447298 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:26:18.447311 1465727 node_conditions.go:105] duration metric: took 3.375419ms to run NodePressure ...
	I0131 03:26:18.447325 1465727 start.go:228] waiting for startup goroutines ...
	I0131 03:26:18.447333 1465727 start.go:233] waiting for cluster config update ...
	I0131 03:26:18.447348 1465727 start.go:242] writing updated cluster config ...
	I0131 03:26:18.447643 1465727 ssh_runner.go:195] Run: rm -f paused
	I0131 03:26:18.500327 1465727 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0131 03:26:18.502092 1465727 out.go:177] 
	W0131 03:26:18.503693 1465727 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0131 03:26:18.505132 1465727 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0131 03:26:18.506889 1465727 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-711547" cluster and "default" namespace by default
	I0131 03:26:14.856448 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:17.357112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.478401 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:20.977208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.978473 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:19.857118 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.358299 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:25.478227 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:27.978500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:24.855341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:26.855774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:28.856168 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:30.477275 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:32.478896 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:31.357512 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:33.363164 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:34.978058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:37.481411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:35.856084 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:38.358589 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:39.976914 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:41.979388 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:40.856122 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:42.856950 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:44.477345 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:46.478466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:45.356312 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:47.855178 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:48.978543 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.477641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:49.856079 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.856377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:54.358161 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:53.477989 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:55.977887 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:56.855581 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.856493 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.477589 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:00.478116 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:02.978262 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:01.354961 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:03.355994 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.478139 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.977913 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.356248 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.855596 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:10.479147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:12.977533 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:09.856222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:11.857068 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.356693 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.978967 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:17.477119 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:16.854825 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:18.855620 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:19.477877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:21.482081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:20.856333 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.355603 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.978877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:26.477700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:25.356085 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:27.356888 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:28.478497 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:30.977469 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:32.977663 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:29.854905 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:31.855752 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:33.855976 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.480505 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.977880 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.857042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.862112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:39.977961 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.478948 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:40.355787 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.358217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.977950 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.478570 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.855551 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.355853 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.977939 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:51.978267 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.855671 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:52.357889 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:53.979331 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:56.477411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:54.856642 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:57.357372 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:58.478175 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:00.977929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.978272 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:59.856232 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.356390 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:05.477602 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:07.478168 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:04.855423 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:06.859565 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.355517 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.977639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.977754 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.855199 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:13.856260 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:14.477406 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:16.478372 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:15.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:17.861124 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:18.980067 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:21.478833 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:20.356883 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:22.358007 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:23.979040 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.478463 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:24.855207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.855709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.866306 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.978973 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.477340 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.355706 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.855699 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.477521 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:35.478390 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:37.977270 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:36.358244 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:38.855704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:39.979930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.477381 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:40.856442 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.857041 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:44.477500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:46.478446 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:45.356039 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:47.855042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:48.977241 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:50.977925 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:52.978323 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:49.857897 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:51.857941 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:54.357042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.477690 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:57.477927 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.855298 1465898 pod_ready.go:81] duration metric: took 4m0.007008152s waiting for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	E0131 03:28:55.855323 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:28:55.855330 1465898 pod_ready.go:38] duration metric: took 4m2.377385486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:28:55.855346 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:28:55.855399 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:55.855533 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:55.913399 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:55.913425 1465898 cri.go:89] found id: ""
	I0131 03:28:55.913445 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:55.913515 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.918308 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:55.918379 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:55.964846 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:55.964872 1465898 cri.go:89] found id: ""
	I0131 03:28:55.964881 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:55.964942 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.969090 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:55.969158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:56.012247 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:56.012271 1465898 cri.go:89] found id: ""
	I0131 03:28:56.012279 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:56.012337 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.016457 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:56.016535 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:56.053842 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.053867 1465898 cri.go:89] found id: ""
	I0131 03:28:56.053877 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:56.053926 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.057807 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:56.057889 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:28:56.097431 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.097465 1465898 cri.go:89] found id: ""
	I0131 03:28:56.097477 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:28:56.097549 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.101354 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:28:56.101420 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:28:56.136696 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.136725 1465898 cri.go:89] found id: ""
	I0131 03:28:56.136735 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:28:56.136800 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.140584 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:28:56.140661 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:28:56.177606 1465898 cri.go:89] found id: ""
	I0131 03:28:56.177639 1465898 logs.go:284] 0 containers: []
	W0131 03:28:56.177650 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:28:56.177658 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:28:56.177779 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:28:56.215795 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.215824 1465898 cri.go:89] found id: ""
	I0131 03:28:56.215835 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:28:56.215909 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.220297 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:28:56.220324 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:28:56.319500 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:28:56.319544 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.355731 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:28:56.355767 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.410301 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:28:56.410341 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:28:56.858474 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:28:56.858531 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:28:56.903299 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:28:56.903337 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.961020 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:28:56.961070 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.998347 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:28:56.998382 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:28:57.011562 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:28:57.011594 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:28:57.152899 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:28:57.152937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:57.201041 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:28:57.201084 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:57.247253 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:28:57.247289 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.478758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:01.977644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:59.786669 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:28:59.804046 1465898 api_server.go:72] duration metric: took 4m8.808083047s to wait for apiserver process to appear ...
	I0131 03:28:59.804079 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:28:59.804131 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:59.804249 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:59.846418 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:59.846440 1465898 cri.go:89] found id: ""
	I0131 03:28:59.846448 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:59.846516 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.850526 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:59.850588 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:59.892343 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:59.892373 1465898 cri.go:89] found id: ""
	I0131 03:28:59.892382 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:59.892449 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.896483 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:59.896561 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:59.933901 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.933934 1465898 cri.go:89] found id: ""
	I0131 03:28:59.933945 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:59.934012 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.938150 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:59.938232 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:59.980328 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:59.980354 1465898 cri.go:89] found id: ""
	I0131 03:28:59.980363 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:59.980418 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.984866 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:59.984943 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:00.029663 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.029695 1465898 cri.go:89] found id: ""
	I0131 03:29:00.029705 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:00.029753 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.034759 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:00.034827 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:00.084320 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.084347 1465898 cri.go:89] found id: ""
	I0131 03:29:00.084355 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:00.084431 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.088744 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:00.088819 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:00.133028 1465898 cri.go:89] found id: ""
	I0131 03:29:00.133062 1465898 logs.go:284] 0 containers: []
	W0131 03:29:00.133072 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:00.133080 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:00.133145 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:00.175187 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.175219 1465898 cri.go:89] found id: ""
	I0131 03:29:00.175229 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:00.175306 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.179387 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:00.179420 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.233630 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:00.233676 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.271692 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:00.271735 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:00.655131 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:00.655177 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:00.757571 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:00.757628 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:00.805958 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:00.806000 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:00.842604 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:00.842650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:00.888064 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:00.888103 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.939276 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:00.939331 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:00.981965 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:00.982006 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:00.996237 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:00.996265 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:01.129715 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:01.129754 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.677131 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:29:03.684945 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:29:03.687117 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:03.687142 1465898 api_server.go:131] duration metric: took 3.883056117s to wait for apiserver health ...
	I0131 03:29:03.687171 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:03.687245 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:03.687303 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:03.727289 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:03.727314 1465898 cri.go:89] found id: ""
	I0131 03:29:03.727322 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:29:03.727375 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.731095 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:03.731158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:03.779103 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.779134 1465898 cri.go:89] found id: ""
	I0131 03:29:03.779144 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:29:03.779223 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.783387 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:03.783459 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:03.821342 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:03.821368 1465898 cri.go:89] found id: ""
	I0131 03:29:03.821376 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:29:03.821438 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.825907 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:03.825990 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:03.863826 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:03.863853 1465898 cri.go:89] found id: ""
	I0131 03:29:03.863867 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:29:03.863919 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.868093 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:03.868163 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:03.908653 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:03.908681 1465898 cri.go:89] found id: ""
	I0131 03:29:03.908690 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:03.908750 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.912998 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:03.913078 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:03.961104 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:03.961131 1465898 cri.go:89] found id: ""
	I0131 03:29:03.961139 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:03.961212 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.965913 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:03.965996 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:04.003791 1465898 cri.go:89] found id: ""
	I0131 03:29:04.003824 1465898 logs.go:284] 0 containers: []
	W0131 03:29:04.003833 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:04.003840 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:04.003907 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:04.040736 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.040773 1465898 cri.go:89] found id: ""
	I0131 03:29:04.040785 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:04.040852 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:04.045013 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:04.045042 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:04.091615 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:04.091650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:04.204602 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:04.204638 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:04.257510 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:04.257548 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:04.296585 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:04.296619 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:04.360438 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:04.360480 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.398825 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:04.398858 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:04.711357 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:04.711403 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:04.804895 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:04.804940 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:04.819394 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:04.819426 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:04.869897 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:04.869937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:04.918002 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:04.918040 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:07.471428 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:07.471466 1465898 system_pods.go:61] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.471474 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.471481 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.471488 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.471495 1465898 system_pods.go:61] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.471501 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.471516 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.471524 1465898 system_pods.go:61] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.471535 1465898 system_pods.go:74] duration metric: took 3.784356035s to wait for pod list to return data ...
	I0131 03:29:07.471552 1465898 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:07.474519 1465898 default_sa.go:45] found service account: "default"
	I0131 03:29:07.474547 1465898 default_sa.go:55] duration metric: took 2.986529ms for default service account to be created ...
	I0131 03:29:07.474559 1465898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:07.480778 1465898 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:07.480805 1465898 system_pods.go:89] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.480810 1465898 system_pods.go:89] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.480816 1465898 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.480823 1465898 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.480827 1465898 system_pods.go:89] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.480831 1465898 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.480837 1465898 system_pods.go:89] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.480842 1465898 system_pods.go:89] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.480850 1465898 system_pods.go:126] duration metric: took 6.285456ms to wait for k8s-apps to be running ...
	I0131 03:29:07.480856 1465898 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:07.480905 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:07.497612 1465898 system_svc.go:56] duration metric: took 16.74594ms WaitForService to wait for kubelet.
	I0131 03:29:07.497643 1465898 kubeadm.go:581] duration metric: took 4m16.501686281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:07.497678 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:07.501680 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:07.501732 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:07.501748 1465898 node_conditions.go:105] duration metric: took 4.063716ms to run NodePressure ...
	I0131 03:29:07.501763 1465898 start.go:228] waiting for startup goroutines ...
	I0131 03:29:07.501772 1465898 start.go:233] waiting for cluster config update ...
	I0131 03:29:07.501818 1465898 start.go:242] writing updated cluster config ...
	I0131 03:29:07.502234 1465898 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:07.559193 1465898 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:07.561350 1465898 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-873005" cluster and "default" namespace by default
	I0131 03:29:03.978465 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:06.477545 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:08.480466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:10.978639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:13.478152 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978967 1466459 pod_ready.go:81] duration metric: took 4m0.008624682s waiting for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	E0131 03:29:15.978976 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:29:15.978984 1466459 pod_ready.go:38] duration metric: took 4m1.99139457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:29:15.978999 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:29:15.979026 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:15.979074 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:16.041735 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:16.041774 1466459 cri.go:89] found id: ""
	I0131 03:29:16.041784 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:16.041845 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.046910 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:16.046982 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:16.085124 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.085156 1466459 cri.go:89] found id: ""
	I0131 03:29:16.085166 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:16.085226 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.089189 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:16.089274 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:16.129255 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.129286 1466459 cri.go:89] found id: ""
	I0131 03:29:16.129296 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:16.129352 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.133364 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:16.133451 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:16.170605 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.170634 1466459 cri.go:89] found id: ""
	I0131 03:29:16.170643 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:16.170704 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.175117 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:16.175197 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:16.210139 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:16.210169 1466459 cri.go:89] found id: ""
	I0131 03:29:16.210179 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:16.210248 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.214877 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:16.214960 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:16.257772 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.257797 1466459 cri.go:89] found id: ""
	I0131 03:29:16.257807 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:16.257878 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.262276 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:16.262341 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:16.304203 1466459 cri.go:89] found id: ""
	I0131 03:29:16.304233 1466459 logs.go:284] 0 containers: []
	W0131 03:29:16.304241 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:16.304248 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:16.304325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:16.343337 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:16.343360 1466459 cri.go:89] found id: ""
	I0131 03:29:16.343368 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:16.343423 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.347098 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:16.347129 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.389501 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:16.389544 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.426153 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:16.426196 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.476241 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:16.476281 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.533086 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:16.533131 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:16.575664 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:16.575701 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:16.675622 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:16.675669 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:16.690251 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:16.690285 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:16.828714 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:16.828748 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:17.253277 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:17.253335 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:17.304285 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:17.304323 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:17.340432 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:17.340465 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:19.889056 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:29:19.904225 1466459 api_server.go:72] duration metric: took 4m8.286630357s to wait for apiserver process to appear ...
	I0131 03:29:19.904258 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:29:19.904302 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:19.904375 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:19.939116 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:19.939147 1466459 cri.go:89] found id: ""
	I0131 03:29:19.939159 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:19.939225 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.943273 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:19.943351 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:19.979411 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:19.979436 1466459 cri.go:89] found id: ""
	I0131 03:29:19.979445 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:19.979512 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.984054 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:19.984148 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:20.022949 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.022978 1466459 cri.go:89] found id: ""
	I0131 03:29:20.022988 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:20.023046 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.027252 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:20.027325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:20.064215 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.064238 1466459 cri.go:89] found id: ""
	I0131 03:29:20.064246 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:20.064303 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.068589 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:20.068687 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:20.106750 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.106781 1466459 cri.go:89] found id: ""
	I0131 03:29:20.106792 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:20.106854 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.111267 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:20.111342 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:20.147750 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.147789 1466459 cri.go:89] found id: ""
	I0131 03:29:20.147801 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:20.147873 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.152882 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:20.152950 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:20.191082 1466459 cri.go:89] found id: ""
	I0131 03:29:20.191121 1466459 logs.go:284] 0 containers: []
	W0131 03:29:20.191133 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:20.191143 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:20.191226 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:20.226346 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.226373 1466459 cri.go:89] found id: ""
	I0131 03:29:20.226382 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:20.226436 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.230561 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:20.230607 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:20.596919 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:20.596968 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:20.691142 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:20.691184 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:20.750659 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:20.750692 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.816839 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:20.816882 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.852691 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:20.852730 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.909788 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:20.909828 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.950311 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:20.950360 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.985515 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:20.985554 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:21.030306 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:21.030350 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:21.043130 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:21.043172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:21.160716 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:21.160763 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.706550 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:29:23.711528 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:29:23.713998 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:23.714027 1466459 api_server.go:131] duration metric: took 3.809760557s to wait for apiserver health ...
	I0131 03:29:23.714039 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:23.714070 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:23.714142 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:23.754990 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:23.755017 1466459 cri.go:89] found id: ""
	I0131 03:29:23.755028 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:23.755091 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.759151 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:23.759224 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:23.798410 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.798448 1466459 cri.go:89] found id: ""
	I0131 03:29:23.798459 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:23.798541 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.802512 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:23.802588 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:23.840962 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:23.840991 1466459 cri.go:89] found id: ""
	I0131 03:29:23.841001 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:23.841073 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.844943 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:23.845021 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:23.882314 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:23.882355 1466459 cri.go:89] found id: ""
	I0131 03:29:23.882368 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:23.882438 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.886227 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:23.886292 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:23.925001 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:23.925031 1466459 cri.go:89] found id: ""
	I0131 03:29:23.925042 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:23.925100 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.929531 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:23.929601 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:23.969068 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:23.969098 1466459 cri.go:89] found id: ""
	I0131 03:29:23.969108 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:23.969167 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.973154 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:23.973216 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:24.010928 1466459 cri.go:89] found id: ""
	I0131 03:29:24.010956 1466459 logs.go:284] 0 containers: []
	W0131 03:29:24.010963 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:24.010970 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:24.011026 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:24.052588 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.052614 1466459 cri.go:89] found id: ""
	I0131 03:29:24.052622 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:24.052678 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:24.056735 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:24.056762 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:24.105290 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:24.105324 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:24.152634 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:24.152678 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:24.198981 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:24.199021 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:24.247140 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:24.247172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:24.287472 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:24.287502 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:24.344060 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:24.344101 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.384811 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:24.384846 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:24.707577 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:24.707628 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:24.756450 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:24.756490 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:24.844886 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:24.844935 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:24.859102 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:24.859132 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:27.482952 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:27.482992 1466459 system_pods.go:61] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.483000 1466459 system_pods.go:61] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.483007 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.483027 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.483038 1466459 system_pods.go:61] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.483049 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.483056 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.483066 1466459 system_pods.go:61] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.483076 1466459 system_pods.go:74] duration metric: took 3.76903179s to wait for pod list to return data ...
	I0131 03:29:27.483087 1466459 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:27.486092 1466459 default_sa.go:45] found service account: "default"
	I0131 03:29:27.486121 1466459 default_sa.go:55] duration metric: took 3.025473ms for default service account to be created ...
	I0131 03:29:27.486131 1466459 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:27.491964 1466459 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:27.491989 1466459 system_pods.go:89] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.491997 1466459 system_pods.go:89] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.492004 1466459 system_pods.go:89] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.492010 1466459 system_pods.go:89] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.492015 1466459 system_pods.go:89] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.492022 1466459 system_pods.go:89] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.492032 1466459 system_pods.go:89] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.492044 1466459 system_pods.go:89] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.492059 1466459 system_pods.go:126] duration metric: took 5.920402ms to wait for k8s-apps to be running ...
	I0131 03:29:27.492076 1466459 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:27.492131 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:27.507857 1466459 system_svc.go:56] duration metric: took 15.770556ms WaitForService to wait for kubelet.
	I0131 03:29:27.507891 1466459 kubeadm.go:581] duration metric: took 4m15.890307101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:27.507918 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:27.510942 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:27.510968 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:27.510980 1466459 node_conditions.go:105] duration metric: took 3.056564ms to run NodePressure ...
	I0131 03:29:27.510992 1466459 start.go:228] waiting for startup goroutines ...
	I0131 03:29:27.510998 1466459 start.go:233] waiting for cluster config update ...
	I0131 03:29:27.511008 1466459 start.go:242] writing updated cluster config ...
	I0131 03:29:27.511334 1466459 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:27.564506 1466459 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:27.566730 1466459 out.go:177] * Done! kubectl is now configured to use "embed-certs-958254" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:19:45 UTC, ends at Wed 2024-01-31 03:38:29 UTC. --
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.371498962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672309371486630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=205cc550-6eae-4c2f-b63f-98e01dc320b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.372017438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=14818678-4965-4a48-af62-57455882af02 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.372083889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=14818678-4965-4a48-af62-57455882af02 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.372310360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9,PodSandboxId:d872a54f28ec3d515a97a89239fe9d18a4439ed5d23c67371a90db8c0263fab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671514865294927,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019a6865-9ffb-4987-91d6-b679aaea9176,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7b76e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a,PodSandboxId:642e1c2de3a0230712aee73edc13887afd1ee2edb3fca11f51c6c93a281a5786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671514038911891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2n2v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4679d4-8107-4a80-ba07-ce446e1e5d60,},Annotations:map[string]string{io.kubernetes.container.hash: 94e72910,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71,PodSandboxId:274e38d2caab4bff7c37be00a9a0e55f02a1ea8b62ee915b32c17da407fc5bad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671513628896127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bnt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c92e2c-38c9-4c69-9ad3-a080b528f55b,},Annotations:map[string]string{io.kubernetes.container.hash: 3153068c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666,PodSandboxId:d68b0b3c616fdce874d6290e61da677e9ad64ad3524a251a2566382e5bc1d4ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671491471571511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c89d8de35203c4937d336ffd049f0c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 72c2f5cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c,PodSandboxId:9d8571f608d3a2d5eaf5ffff214cf7052f7ae0c14574eefbd7a4524956d09655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671491288975878,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215fcbbaabfad6adf8979dd73cdbd119,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b,PodSandboxId:c52036d797def3b2169eacb407337b3c26e02a2050835fbf9dbf68077e0eff65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671490863144055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a0a87e8db68805776126
f88fed9f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b,PodSandboxId:e48e454bfa0e345484da5933a2f4a08f609c726eab8f86cd8cfc28db75d7d5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671490677347330,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d77138dddca85a7e1089e836159cf396
,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=14818678-4965-4a48-af62-57455882af02 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.411410127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c7891c48-b8cb-4c82-ad25-9de98e4849c2 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.411528191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c7891c48-b8cb-4c82-ad25-9de98e4849c2 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.413041167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5383b2aa-43c7-4cbb-af93-935780ab28e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.413646139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672309413627818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5383b2aa-43c7-4cbb-af93-935780ab28e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.414444246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cd7d0fb5-a8c8-4366-be26-53c92f64804d name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.414494770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cd7d0fb5-a8c8-4366-be26-53c92f64804d name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.414653477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9,PodSandboxId:d872a54f28ec3d515a97a89239fe9d18a4439ed5d23c67371a90db8c0263fab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671514865294927,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019a6865-9ffb-4987-91d6-b679aaea9176,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7b76e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a,PodSandboxId:642e1c2de3a0230712aee73edc13887afd1ee2edb3fca11f51c6c93a281a5786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671514038911891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2n2v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4679d4-8107-4a80-ba07-ce446e1e5d60,},Annotations:map[string]string{io.kubernetes.container.hash: 94e72910,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71,PodSandboxId:274e38d2caab4bff7c37be00a9a0e55f02a1ea8b62ee915b32c17da407fc5bad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671513628896127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bnt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c92e2c-38c9-4c69-9ad3-a080b528f55b,},Annotations:map[string]string{io.kubernetes.container.hash: 3153068c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666,PodSandboxId:d68b0b3c616fdce874d6290e61da677e9ad64ad3524a251a2566382e5bc1d4ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671491471571511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c89d8de35203c4937d336ffd049f0c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 72c2f5cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c,PodSandboxId:9d8571f608d3a2d5eaf5ffff214cf7052f7ae0c14574eefbd7a4524956d09655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671491288975878,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215fcbbaabfad6adf8979dd73cdbd119,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b,PodSandboxId:c52036d797def3b2169eacb407337b3c26e02a2050835fbf9dbf68077e0eff65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671490863144055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a0a87e8db68805776126
f88fed9f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b,PodSandboxId:e48e454bfa0e345484da5933a2f4a08f609c726eab8f86cd8cfc28db75d7d5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671490677347330,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d77138dddca85a7e1089e836159cf396
,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cd7d0fb5-a8c8-4366-be26-53c92f64804d name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.454976080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cafa8ff5-91a2-4f84-94cf-d69e1c8438c3 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.455040693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cafa8ff5-91a2-4f84-94cf-d69e1c8438c3 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.456818570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=02f74e5b-25a0-453b-b130-b29cb728e407 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.457509206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672309457487530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=02f74e5b-25a0-453b-b130-b29cb728e407 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.458320899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=edefede9-3585-4e66-a855-0f1b9e15f82c name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.458522659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=edefede9-3585-4e66-a855-0f1b9e15f82c name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.458719205Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9,PodSandboxId:d872a54f28ec3d515a97a89239fe9d18a4439ed5d23c67371a90db8c0263fab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671514865294927,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019a6865-9ffb-4987-91d6-b679aaea9176,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7b76e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a,PodSandboxId:642e1c2de3a0230712aee73edc13887afd1ee2edb3fca11f51c6c93a281a5786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671514038911891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2n2v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4679d4-8107-4a80-ba07-ce446e1e5d60,},Annotations:map[string]string{io.kubernetes.container.hash: 94e72910,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71,PodSandboxId:274e38d2caab4bff7c37be00a9a0e55f02a1ea8b62ee915b32c17da407fc5bad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671513628896127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bnt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c92e2c-38c9-4c69-9ad3-a080b528f55b,},Annotations:map[string]string{io.kubernetes.container.hash: 3153068c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666,PodSandboxId:d68b0b3c616fdce874d6290e61da677e9ad64ad3524a251a2566382e5bc1d4ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671491471571511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c89d8de35203c4937d336ffd049f0c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 72c2f5cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c,PodSandboxId:9d8571f608d3a2d5eaf5ffff214cf7052f7ae0c14574eefbd7a4524956d09655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671491288975878,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215fcbbaabfad6adf8979dd73cdbd119,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b,PodSandboxId:c52036d797def3b2169eacb407337b3c26e02a2050835fbf9dbf68077e0eff65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671490863144055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a0a87e8db68805776126
f88fed9f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b,PodSandboxId:e48e454bfa0e345484da5933a2f4a08f609c726eab8f86cd8cfc28db75d7d5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671490677347330,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d77138dddca85a7e1089e836159cf396
,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=edefede9-3585-4e66-a855-0f1b9e15f82c name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.499484809Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fb652fd2-6829-441d-bc25-03a82c78e502 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.499540412Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fb652fd2-6829-441d-bc25-03a82c78e502 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.501688955Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c0d511bd-e397-41b9-a58a-870970b9663e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.502985424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672309502965492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c0d511bd-e397-41b9-a58a-870970b9663e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.503723426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=86a0ffbe-fd88-43ad-a879-3f86e16789d7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.503768706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=86a0ffbe-fd88-43ad-a879-3f86e16789d7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:38:29 embed-certs-958254 crio[702]: time="2024-01-31 03:38:29.503956329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9,PodSandboxId:d872a54f28ec3d515a97a89239fe9d18a4439ed5d23c67371a90db8c0263fab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671514865294927,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019a6865-9ffb-4987-91d6-b679aaea9176,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7b76e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a,PodSandboxId:642e1c2de3a0230712aee73edc13887afd1ee2edb3fca11f51c6c93a281a5786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671514038911891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2n2v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4679d4-8107-4a80-ba07-ce446e1e5d60,},Annotations:map[string]string{io.kubernetes.container.hash: 94e72910,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71,PodSandboxId:274e38d2caab4bff7c37be00a9a0e55f02a1ea8b62ee915b32c17da407fc5bad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671513628896127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bnt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c92e2c-38c9-4c69-9ad3-a080b528f55b,},Annotations:map[string]string{io.kubernetes.container.hash: 3153068c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666,PodSandboxId:d68b0b3c616fdce874d6290e61da677e9ad64ad3524a251a2566382e5bc1d4ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671491471571511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c89d8de35203c4937d336ffd049f0c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 72c2f5cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c,PodSandboxId:9d8571f608d3a2d5eaf5ffff214cf7052f7ae0c14574eefbd7a4524956d09655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671491288975878,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215fcbbaabfad6adf8979dd73cdbd119,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b,PodSandboxId:c52036d797def3b2169eacb407337b3c26e02a2050835fbf9dbf68077e0eff65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671490863144055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a0a87e8db68805776126
f88fed9f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b,PodSandboxId:e48e454bfa0e345484da5933a2f4a08f609c726eab8f86cd8cfc28db75d7d5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671490677347330,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d77138dddca85a7e1089e836159cf396
,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=86a0ffbe-fd88-43ad-a879-3f86e16789d7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	31a6175cd71fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   d872a54f28ec3       storage-provisioner
	282758b49ba0b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   642e1c2de3a02       kube-proxy-2n2v5
	6327cb1857367       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   274e38d2caab4       coredns-5dd5756b68-bnt4w
	dee610ad050a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   d68b0b3c616fd       etcd-embed-certs-958254
	053f8db5e01cb       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   9d8571f608d3a       kube-scheduler-embed-certs-958254
	4173b9783cb73       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   c52036d797def       kube-controller-manager-embed-certs-958254
	60fadb7138826       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   e48e454bfa0e3       kube-apiserver-embed-certs-958254
	
	
	==> coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               embed-certs-958254
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-958254
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=embed-certs-958254
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:24:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-958254
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 03:38:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:35:31 +0000   Wed, 31 Jan 2024 03:24:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:35:31 +0000   Wed, 31 Jan 2024 03:24:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:35:31 +0000   Wed, 31 Jan 2024 03:24:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:35:31 +0000   Wed, 31 Jan 2024 03:24:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    embed-certs-958254
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 635c5eb7349e4485a95c285d27353b0b
	  System UUID:                635c5eb7-349e-4485-a95c-285d27353b0b
	  Boot ID:                    2db96187-effc-4aaf-ac8e-36b129cbf8c3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-bnt4w                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-958254                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-958254             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-958254    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-2n2v5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-958254             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-dj7l2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-958254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-958254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-958254 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node embed-certs-958254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node embed-certs-958254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node embed-certs-958254 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node embed-certs-958254 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node embed-certs-958254 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-958254 event: Registered Node embed-certs-958254 in Controller
	
	
	==> dmesg <==
	[Jan31 03:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063264] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.529417] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.872157] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134284] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.417081] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.402692] systemd-fstab-generator[628]: Ignoring "noauto" for root device
	[  +0.133707] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.195280] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.128863] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.291918] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[Jan31 03:20] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[ +20.110355] kauditd_printk_skb: 29 callbacks suppressed
	[Jan31 03:24] systemd-fstab-generator[3458]: Ignoring "noauto" for root device
	[  +9.288294] systemd-fstab-generator[3784]: Ignoring "noauto" for root device
	[Jan31 03:25] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] <==
	{"level":"info","ts":"2024-01-31T03:24:53.189805Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-31T03:24:53.189811Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-31T03:24:53.196591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a switched to configuration voters=(5007548384377851754)"}
	{"level":"info","ts":"2024-01-31T03:24:53.196712Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f6de64b207a208a","local-member-id":"457e62b9766c4f6a","added-peer-id":"457e62b9766c4f6a","added-peer-peer-urls":["https://192.168.39.232:2380"]}
	{"level":"info","ts":"2024-01-31T03:24:53.341777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:53.34192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:53.341971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a received MsgPreVoteResp from 457e62b9766c4f6a at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:53.342013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became candidate at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:53.342082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a received MsgVoteResp from 457e62b9766c4f6a at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:53.342119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became leader at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:53.342153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 457e62b9766c4f6a elected leader 457e62b9766c4f6a at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:53.343688Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"457e62b9766c4f6a","local-member-attributes":"{Name:embed-certs-958254 ClientURLs:[https://192.168.39.232:2379]}","request-path":"/0/members/457e62b9766c4f6a/attributes","cluster-id":"6f6de64b207a208a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:24:53.344369Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:53.34454Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:24:53.34494Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:24:53.345094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T03:24:53.345314Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:24:53.3496Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:24:53.346177Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.232:2379"}
	{"level":"info","ts":"2024-01-31T03:24:53.3554Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f6de64b207a208a","local-member-id":"457e62b9766c4f6a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:53.355487Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:53.355553Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:34:53.991217Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-01-31T03:34:53.994075Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.19552ms","hash":782478356}
	{"level":"info","ts":"2024-01-31T03:34:53.994161Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":782478356,"revision":714,"compact-revision":-1}
	
	
	==> kernel <==
	 03:38:29 up 18 min,  0 users,  load average: 0.11, 0.14, 0.11
	Linux embed-certs-958254 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] <==
	I0131 03:34:55.618014       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:34:56.617868       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:34:56.617990       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:34:56.618025       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:34:56.617896       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:34:56.618183       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:34:56.619507       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:35:55.513653       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:35:56.618690       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:35:56.618868       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:35:56.618909       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:35:56.619848       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:35:56.619935       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:35:56.619942       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:36:55.513887       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0131 03:37:55.513725       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:37:56.619881       1 handler_proxy.go:93] no RequestInfo found in the context
	W0131 03:37:56.620028       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:37:56.620118       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:37:56.620146       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0131 03:37:56.620187       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:37:56.621347       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] <==
	I0131 03:32:41.628569       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:33:11.101397       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:33:11.637104       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:33:41.107725       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:33:41.651161       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:34:11.113727       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:34:11.660163       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:34:41.119827       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:34:41.668994       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:35:11.125667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:35:11.677638       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:35:41.132498       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:35:41.687154       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:36:08.957024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="318.403µs"
	E0131 03:36:11.139744       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:36:11.697321       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:36:20.959014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="810.778µs"
	E0131 03:36:41.146900       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:36:41.706909       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:37:11.154006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:37:11.717143       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:37:41.162188       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:37:41.727007       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:38:11.168792       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:38:11.734763       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] <==
	I0131 03:25:14.718630       1 server_others.go:69] "Using iptables proxy"
	I0131 03:25:14.760680       1 node.go:141] Successfully retrieved node IP: 192.168.39.232
	I0131 03:25:14.862740       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 03:25:14.862896       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:25:14.879584       1 server_others.go:152] "Using iptables Proxier"
	I0131 03:25:14.879696       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:25:14.880224       1 server.go:846] "Version info" version="v1.28.4"
	I0131 03:25:14.880303       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:25:14.882013       1 config.go:188] "Starting service config controller"
	I0131 03:25:14.882970       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:25:14.883021       1 config.go:315] "Starting node config controller"
	I0131 03:25:14.883030       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:25:14.890186       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:25:14.890312       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:25:14.985214       1 shared_informer.go:318] Caches are synced for node config
	I0131 03:25:14.985218       1 shared_informer.go:318] Caches are synced for service config
	I0131 03:25:14.991518       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] <==
	W0131 03:24:55.627356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:55.627852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:55.627390       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:24:55.627913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 03:24:56.457113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:56.457335       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:56.520133       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 03:24:56.520278       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0131 03:24:56.523514       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:56.523598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:56.686225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:24:56.686419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0131 03:24:56.689041       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 03:24:56.689099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0131 03:24:56.703743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:24:56.703788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0131 03:24:56.711766       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:24:56.711808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:24:56.886531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:56.886625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:56.897130       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 03:24:56.897286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0131 03:24:56.913515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0131 03:24:56.913606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0131 03:24:59.121168       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:19:45 UTC, ends at Wed 2024-01-31 03:38:30 UTC. --
	Jan 31 03:35:53 embed-certs-958254 kubelet[3791]: E0131 03:35:53.948350    3791 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6dzzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-dj7l2_kube-system(9a313a14-a142-46ad-8b24-f8ab75f92fa5): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:35:53 embed-certs-958254 kubelet[3791]: E0131 03:35:53.948396    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:35:59 embed-certs-958254 kubelet[3791]: E0131 03:35:59.019797    3791 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:35:59 embed-certs-958254 kubelet[3791]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:35:59 embed-certs-958254 kubelet[3791]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:35:59 embed-certs-958254 kubelet[3791]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:36:08 embed-certs-958254 kubelet[3791]: E0131 03:36:08.935001    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:36:20 embed-certs-958254 kubelet[3791]: E0131 03:36:20.937129    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:36:34 embed-certs-958254 kubelet[3791]: E0131 03:36:34.935457    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:36:46 embed-certs-958254 kubelet[3791]: E0131 03:36:46.934699    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:36:59 embed-certs-958254 kubelet[3791]: E0131 03:36:59.017571    3791 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:36:59 embed-certs-958254 kubelet[3791]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:36:59 embed-certs-958254 kubelet[3791]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:36:59 embed-certs-958254 kubelet[3791]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:37:01 embed-certs-958254 kubelet[3791]: E0131 03:37:01.934367    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:37:15 embed-certs-958254 kubelet[3791]: E0131 03:37:15.935441    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:37:28 embed-certs-958254 kubelet[3791]: E0131 03:37:28.935524    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:37:40 embed-certs-958254 kubelet[3791]: E0131 03:37:40.936308    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:37:54 embed-certs-958254 kubelet[3791]: E0131 03:37:54.934748    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:37:59 embed-certs-958254 kubelet[3791]: E0131 03:37:59.019309    3791 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:37:59 embed-certs-958254 kubelet[3791]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:37:59 embed-certs-958254 kubelet[3791]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:37:59 embed-certs-958254 kubelet[3791]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:38:08 embed-certs-958254 kubelet[3791]: E0131 03:38:08.934410    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:38:23 embed-certs-958254 kubelet[3791]: E0131 03:38:23.933937    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	
	
	==> storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] <==
	I0131 03:25:15.037808       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 03:25:15.067714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 03:25:15.067812       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 03:25:15.117698       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 03:25:15.117950       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-958254_f815b7d2-e7cf-4663-87f8-8d4d338bc705!
	I0131 03:25:15.120871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"151c31f5-d93d-432f-89fe-6f972c6676bb", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-958254_f815b7d2-e7cf-4663-87f8-8d4d338bc705 became leader
	I0131 03:25:15.218568       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-958254_f815b7d2-e7cf-4663-87f8-8d4d338bc705!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-958254 -n embed-certs-958254
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-958254 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-dj7l2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-958254 describe pod metrics-server-57f55c9bc5-dj7l2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-958254 describe pod metrics-server-57f55c9bc5-dj7l2: exit status 1 (69.904166ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-dj7l2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-958254 describe pod metrics-server-57f55c9bc5-dj7l2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (278.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0131 03:35:12.029178 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-625812 -n no-preload-625812
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-31 03:39:29.577179434 +0000 UTC m=+5728.218527247
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-625812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-625812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.187µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-625812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-625812 -n no-preload-625812
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-625812 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-625812 logs -n 25: (1.616792389s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-873005  | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC |                     |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229073             | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229073                  | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229073 --memory=2200 --alsologtostderr   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-229073 image list                           | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-096443 | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | disable-driver-mounts-096443                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625812                  | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:25 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-711547             | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-873005       | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-958254            | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:29 UTC |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-958254                 | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:17 UTC | 31 Jan 24 03:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:39 UTC | 31 Jan 24 03:39 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:17:03
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:17:03.356553 1466459 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:17:03.356722 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356731 1466459 out.go:309] Setting ErrFile to fd 2...
	I0131 03:17:03.356736 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356921 1466459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:17:03.357497 1466459 out.go:303] Setting JSON to false
	I0131 03:17:03.358564 1466459 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28767,"bootTime":1706642257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:17:03.358632 1466459 start.go:138] virtualization: kvm guest
	I0131 03:17:03.361346 1466459 out.go:177] * [embed-certs-958254] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:17:03.363037 1466459 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:17:03.363052 1466459 notify.go:220] Checking for updates...
	I0131 03:17:03.364655 1466459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:17:03.366388 1466459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:17:03.368086 1466459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:17:03.369351 1466459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:17:03.370735 1466459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:17:03.372623 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:17:03.373004 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.373116 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.388091 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0131 03:17:03.388612 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.389200 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.389224 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.389606 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.389816 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.390157 1466459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:17:03.390631 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.390696 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.407513 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0131 03:17:03.408013 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.408552 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.408578 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.408936 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.409175 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.446580 1466459 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 03:17:03.447834 1466459 start.go:298] selected driver: kvm2
	I0131 03:17:03.447850 1466459 start.go:902] validating driver "kvm2" against &{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.447974 1466459 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:17:03.448798 1466459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.448929 1466459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:17:03.464292 1466459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:17:03.464713 1466459 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:17:03.464803 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:17:03.464821 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:17:03.464840 1466459 start_flags.go:321] config:
	{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.465034 1466459 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.466926 1466459 out.go:177] * Starting control plane node embed-certs-958254 in cluster embed-certs-958254
	I0131 03:17:03.166851 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:03.468094 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:17:03.468158 1466459 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:17:03.468179 1466459 cache.go:56] Caching tarball of preloaded images
	I0131 03:17:03.468267 1466459 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:17:03.468280 1466459 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:17:03.468422 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:17:03.468675 1466459 start.go:365] acquiring machines lock for embed-certs-958254: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:17:09.246814 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:12.318761 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:18.398731 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:21.470788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:27.550785 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:30.622804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:36.702802 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:39.774755 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:45.854764 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:48.926773 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:55.006804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:58.078768 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:04.158801 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:07.230749 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:13.310800 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:16.382788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:22.462833 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:25.534734 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:31.614821 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:34.686831 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:40.766796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:43.838796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:49.918807 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:52.923102 1465727 start.go:369] acquired machines lock for "old-k8s-version-711547" in 4m24.328353275s
	I0131 03:18:52.923156 1465727 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:18:52.923163 1465727 fix.go:54] fixHost starting: 
	I0131 03:18:52.923502 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:18:52.923535 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:18:52.938858 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0131 03:18:52.939426 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:18:52.939966 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:18:52.939993 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:18:52.940435 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:18:52.940700 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:18:52.940890 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:18:52.942694 1465727 fix.go:102] recreateIfNeeded on old-k8s-version-711547: state=Stopped err=<nil>
	I0131 03:18:52.942735 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	W0131 03:18:52.942937 1465727 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:18:52.944846 1465727 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-711547" ...
	I0131 03:18:52.946449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Start
	I0131 03:18:52.946661 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring networks are active...
	I0131 03:18:52.947481 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network default is active
	I0131 03:18:52.947856 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network mk-old-k8s-version-711547 is active
	I0131 03:18:52.948334 1465727 main.go:141] libmachine: (old-k8s-version-711547) Getting domain xml...
	I0131 03:18:52.949108 1465727 main.go:141] libmachine: (old-k8s-version-711547) Creating domain...
	I0131 03:18:52.920695 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:18:52.920763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:18:52.922905 1465496 machine.go:91] provisioned docker machine in 4m37.358485704s
	I0131 03:18:52.922986 1465496 fix.go:56] fixHost completed within 4m37.381896689s
	I0131 03:18:52.922997 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 4m37.381936859s
	W0131 03:18:52.923026 1465496 start.go:694] error starting host: provision: host is not running
	W0131 03:18:52.923126 1465496 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0131 03:18:52.923138 1465496 start.go:709] Will try again in 5 seconds ...
	I0131 03:18:54.170545 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting to get IP...
	I0131 03:18:54.171580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.171974 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.172053 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.171968 1467209 retry.go:31] will retry after 195.285731ms: waiting for machine to come up
	I0131 03:18:54.368768 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.369288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.369325 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.369224 1467209 retry.go:31] will retry after 291.163288ms: waiting for machine to come up
	I0131 03:18:54.661822 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.662222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.662266 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.662214 1467209 retry.go:31] will retry after 396.125436ms: waiting for machine to come up
	I0131 03:18:55.059613 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.060062 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.060099 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.060009 1467209 retry.go:31] will retry after 609.786973ms: waiting for machine to come up
	I0131 03:18:55.671954 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.672388 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.672431 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.672334 1467209 retry.go:31] will retry after 716.179011ms: waiting for machine to come up
	I0131 03:18:56.390239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:56.390632 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:56.390667 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:56.390568 1467209 retry.go:31] will retry after 881.998023ms: waiting for machine to come up
	I0131 03:18:57.274841 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:57.275260 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:57.275293 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:57.275202 1467209 retry.go:31] will retry after 1.172177257s: waiting for machine to come up
	I0131 03:18:58.449291 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:58.449814 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:58.449869 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:58.449774 1467209 retry.go:31] will retry after 1.046487536s: waiting for machine to come up
	I0131 03:18:57.925392 1465496 start.go:365] acquiring machines lock for no-preload-625812: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:18:59.498215 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:59.498699 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:59.498739 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:59.498640 1467209 retry.go:31] will retry after 1.563889217s: waiting for machine to come up
	I0131 03:19:01.063580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:01.064137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:01.064179 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:01.064063 1467209 retry.go:31] will retry after 2.225514736s: waiting for machine to come up
	I0131 03:19:03.290747 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:03.291285 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:03.291322 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:03.291205 1467209 retry.go:31] will retry after 2.011947032s: waiting for machine to come up
	I0131 03:19:05.305574 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:05.306072 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:05.306106 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:05.306012 1467209 retry.go:31] will retry after 3.104285698s: waiting for machine to come up
	I0131 03:19:08.411557 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:08.412028 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:08.412054 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:08.411975 1467209 retry.go:31] will retry after 4.201966677s: waiting for machine to come up
	I0131 03:19:12.618299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.618866 1465727 main.go:141] libmachine: (old-k8s-version-711547) Found IP for machine: 192.168.50.63
	I0131 03:19:12.618893 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserving static IP address...
	I0131 03:19:12.618913 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has current primary IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.619364 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserved static IP address: 192.168.50.63
	I0131 03:19:12.619389 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting for SSH to be available...
	I0131 03:19:12.619414 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.619452 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | skip adding static IP to network mk-old-k8s-version-711547 - found existing host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"}
	I0131 03:19:12.619471 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Getting to WaitForSSH function...
	I0131 03:19:12.621473 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621783 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.621805 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621891 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH client type: external
	I0131 03:19:12.621934 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa (-rw-------)
	I0131 03:19:12.621965 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:12.621977 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | About to run SSH command:
	I0131 03:19:12.621987 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | exit 0
	I0131 03:19:12.718254 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:12.718659 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetConfigRaw
	I0131 03:19:12.719369 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:12.722134 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722588 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.722611 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722906 1465727 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/config.json ...
	I0131 03:19:12.723101 1465727 machine.go:88] provisioning docker machine ...
	I0131 03:19:12.723121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:12.723399 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723611 1465727 buildroot.go:166] provisioning hostname "old-k8s-version-711547"
	I0131 03:19:12.723630 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723795 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.726052 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726463 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.726507 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726656 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.726832 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727022 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727122 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.727283 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.727665 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.727680 1465727 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-711547 && echo "old-k8s-version-711547" | sudo tee /etc/hostname
	I0131 03:19:12.870818 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-711547
	
	I0131 03:19:12.870872 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.873799 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874205 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.874242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874355 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.874585 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874774 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874920 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.875079 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.875412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.875428 1465727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-711547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-711547/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-711547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:13.014386 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:13.014419 1465727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:13.014447 1465727 buildroot.go:174] setting up certificates
	I0131 03:19:13.014460 1465727 provision.go:83] configureAuth start
	I0131 03:19:13.014471 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:13.014821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:13.017730 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018105 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.018149 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018286 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.020361 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020680 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.020707 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020896 1465727 provision.go:138] copyHostCerts
	I0131 03:19:13.020961 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:13.020975 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:13.021069 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:13.021199 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:13.021212 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:13.021252 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:13.021393 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:13.021404 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:13.021442 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:13.021512 1465727 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-711547 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube old-k8s-version-711547]
	I0131 03:19:13.265370 1465727 provision.go:172] copyRemoteCerts
	I0131 03:19:13.265438 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:13.265466 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.268546 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269055 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.269090 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269281 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.269518 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.269688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.269849 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.362848 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:13.384287 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0131 03:19:13.405813 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:19:13.427630 1465727 provision.go:86] duration metric: configureAuth took 413.151329ms
	I0131 03:19:13.427671 1465727 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:13.427880 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:19:13.427963 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.430829 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.431299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431515 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.431771 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.431939 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.432092 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.432256 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.432619 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.432638 1465727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:14.011257 1465898 start.go:369] acquired machines lock for "default-k8s-diff-port-873005" in 4m34.419162413s
	I0131 03:19:14.011330 1465898 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:14.011340 1465898 fix.go:54] fixHost starting: 
	I0131 03:19:14.011729 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:14.011767 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:14.028941 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0131 03:19:14.029399 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:14.029937 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:19:14.029968 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:14.030321 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:14.030510 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:14.030692 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:19:14.032290 1465898 fix.go:102] recreateIfNeeded on default-k8s-diff-port-873005: state=Stopped err=<nil>
	I0131 03:19:14.032322 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	W0131 03:19:14.032499 1465898 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:14.034263 1465898 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-873005" ...
	I0131 03:19:14.035857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Start
	I0131 03:19:14.036028 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring networks are active...
	I0131 03:19:14.036734 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network default is active
	I0131 03:19:14.037140 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network mk-default-k8s-diff-port-873005 is active
	I0131 03:19:14.037572 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Getting domain xml...
	I0131 03:19:14.038254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Creating domain...
	I0131 03:19:13.745584 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:13.745630 1465727 machine.go:91] provisioned docker machine in 1.02251207s
	I0131 03:19:13.745646 1465727 start.go:300] post-start starting for "old-k8s-version-711547" (driver="kvm2")
	I0131 03:19:13.745663 1465727 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:13.745688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:13.746069 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:13.746100 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.748837 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749259 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.749309 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749489 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.749691 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.749848 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.749999 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.844423 1465727 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:13.848230 1465727 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:13.848263 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:13.848346 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:13.848431 1465727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:13.848517 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:13.857046 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:13.877753 1465727 start.go:303] post-start completed in 132.085834ms
	I0131 03:19:13.877806 1465727 fix.go:56] fixHost completed within 20.954639604s
	I0131 03:19:13.877836 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.880627 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.880914 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.880948 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.881168 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.881401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881594 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881802 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.882012 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.882412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.882424 1465727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:14.011062 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671153.963761136
	
	I0131 03:19:14.011098 1465727 fix.go:206] guest clock: 1706671153.963761136
	I0131 03:19:14.011111 1465727 fix.go:219] Guest: 2024-01-31 03:19:13.963761136 +0000 UTC Remote: 2024-01-31 03:19:13.877812082 +0000 UTC m=+285.451358106 (delta=85.949054ms)
	I0131 03:19:14.011141 1465727 fix.go:190] guest clock delta is within tolerance: 85.949054ms
	I0131 03:19:14.011149 1465727 start.go:83] releasing machines lock for "old-k8s-version-711547", held for 21.088010365s
	I0131 03:19:14.011234 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.011556 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:14.014323 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014754 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.014790 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014966 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015623 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015846 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015953 1465727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:14.016017 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.016087 1465727 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:14.016121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.018767 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019063 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019147 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019185 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019338 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019422 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019450 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019500 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019693 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.019775 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019854 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.019952 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.020096 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.111280 1465727 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:14.148710 1465727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:14.287476 1465727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:14.293232 1465727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:14.293309 1465727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:14.306910 1465727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:14.306939 1465727 start.go:475] detecting cgroup driver to use...
	I0131 03:19:14.307001 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:14.325824 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:14.339835 1465727 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:14.339908 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:14.354064 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:14.367342 1465727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:14.476462 1465727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:14.602643 1465727 docker.go:233] disabling docker service ...
	I0131 03:19:14.602711 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:14.618228 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:14.630450 1465727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:14.758176 1465727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:14.870949 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:14.882268 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:14.898622 1465727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0131 03:19:14.898685 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.907377 1465727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:14.907470 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.915868 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.924046 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.932324 1465727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:14.941046 1465727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:14.949134 1465727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:14.949196 1465727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:14.965561 1465727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:14.973790 1465727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:15.078782 1465727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:15.239650 1465727 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:15.239735 1465727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:15.244418 1465727 start.go:543] Will wait 60s for crictl version
	I0131 03:19:15.244501 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:15.247984 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:15.287716 1465727 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:15.287827 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.339818 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.393318 1465727 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0131 03:19:15.394911 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:15.397888 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:15.398313 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398637 1465727 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:15.402865 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:15.414268 1465727 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 03:19:15.414361 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:15.460589 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:15.460676 1465727 ssh_runner.go:195] Run: which lz4
	I0131 03:19:15.464663 1465727 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0131 03:19:15.468694 1465727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:15.468728 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0131 03:19:17.115892 1465727 crio.go:444] Took 1.651263 seconds to copy over tarball
	I0131 03:19:17.115979 1465727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:15.308732 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting to get IP...
	I0131 03:19:15.309704 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310121 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.310092 1467325 retry.go:31] will retry after 215.51674ms: waiting for machine to come up
	I0131 03:19:15.527614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528155 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528192 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.528108 1467325 retry.go:31] will retry after 346.07944ms: waiting for machine to come up
	I0131 03:19:15.875792 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876340 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876375 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.876290 1467325 retry.go:31] will retry after 476.08407ms: waiting for machine to come up
	I0131 03:19:16.353712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354323 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.354196 1467325 retry.go:31] will retry after 382.739917ms: waiting for machine to come up
	I0131 03:19:16.738958 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739534 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739566 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.739504 1467325 retry.go:31] will retry after 511.138171ms: waiting for machine to come up
	I0131 03:19:17.252373 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252862 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252902 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:17.252798 1467325 retry.go:31] will retry after 879.985444ms: waiting for machine to come up
	I0131 03:19:18.134757 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135287 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135313 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:18.135233 1467325 retry.go:31] will retry after 1.043236668s: waiting for machine to come up
	I0131 03:19:19.179844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180339 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180369 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:19.180288 1467325 retry.go:31] will retry after 1.296129808s: waiting for machine to come up
	I0131 03:19:19.822171 1465727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706149181s)
	I0131 03:19:19.822217 1465727 crio.go:451] Took 2.706292 seconds to extract the tarball
	I0131 03:19:19.822233 1465727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:19.861493 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:19.905950 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:19.905979 1465727 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:19:19.906033 1465727 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.906061 1465727 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.906080 1465727 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.906077 1465727 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.906094 1465727 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:19.906099 1465727 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.906111 1465727 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0131 03:19:19.906179 1465727 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907636 1465727 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.907728 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.907746 1465727 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907750 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.907749 1465727 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.907783 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.907805 1465727 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0131 03:19:19.907807 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.091717 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.132448 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.140199 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0131 03:19:20.146177 1465727 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0131 03:19:20.146263 1465727 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.146324 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.206757 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.216932 1465727 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0131 03:19:20.216985 1465727 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.217082 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219340 1465727 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0131 03:19:20.219367 1465727 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0131 03:19:20.219390 1465727 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.219408 1465727 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.219432 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219449 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.222519 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.241389 1465727 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0131 03:19:20.241449 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.241452 1465727 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0131 03:19:20.241566 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.293129 1465727 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0131 03:19:20.293183 1465727 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.293213 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.293262 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.293284 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.293232 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321447 1465727 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0131 03:19:20.321512 1465727 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.321576 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321605 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0131 03:19:20.321743 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0131 03:19:20.401651 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0131 03:19:20.401720 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.401731 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0131 03:19:20.401793 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0131 03:19:20.401872 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0131 03:19:20.401945 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.439360 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0131 03:19:20.449635 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0131 03:19:20.765201 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:20.911818 1465727 cache_images.go:92] LoadImages completed in 1.005820808s
	W0131 03:19:20.911923 1465727 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0131 03:19:20.912019 1465727 ssh_runner.go:195] Run: crio config
	I0131 03:19:20.978267 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:20.978296 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:20.978318 1465727 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:20.978361 1465727 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-711547 NodeName:old-k8s-version-711547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0131 03:19:20.978540 1465727 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-711547"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-711547
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.63:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:20.978635 1465727 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-711547 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:19:20.978690 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0131 03:19:20.988177 1465727 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:20.988281 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:20.999558 1465727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0131 03:19:21.018567 1465727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:21.036137 1465727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0131 03:19:21.051742 1465727 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:21.056334 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:21.068635 1465727 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547 for IP: 192.168.50.63
	I0131 03:19:21.068670 1465727 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:21.068847 1465727 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:21.068894 1465727 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:21.069089 1465727 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/client.key
	I0131 03:19:21.069185 1465727 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key.1519f60b
	I0131 03:19:21.069262 1465727 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key
	I0131 03:19:21.069418 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:21.069460 1465727 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:21.069476 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:21.069517 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:21.069556 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:21.069595 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:21.069658 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:21.070416 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:21.096160 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:21.119906 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:21.144478 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:21.169174 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:21.191807 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:21.215673 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:21.237705 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:21.262763 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:21.284935 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:21.306372 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:21.327718 1465727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:21.343219 1465727 ssh_runner.go:195] Run: openssl version
	I0131 03:19:21.348904 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:21.358119 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362537 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362619 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.368555 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:21.378236 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:21.387651 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392087 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392155 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.397511 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:21.406631 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:21.416176 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420716 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420816 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.426032 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:21.434979 1465727 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:21.439153 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:21.444648 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:21.450243 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:21.455489 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:21.460794 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:21.466219 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:19:21.471530 1465727 kubeadm.go:404] StartCluster: {Name:old-k8s-version-711547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:21.471628 1465727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:21.471677 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:21.508722 1465727 cri.go:89] found id: ""
	I0131 03:19:21.508795 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:21.517913 1465727 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:21.517943 1465727 kubeadm.go:636] restartCluster start
	I0131 03:19:21.518012 1465727 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:21.526290 1465727 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:21.527501 1465727 kubeconfig.go:92] found "old-k8s-version-711547" server: "https://192.168.50.63:8443"
	I0131 03:19:21.530259 1465727 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:21.538442 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:21.538528 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:21.548956 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.038468 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.038574 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.049394 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.538605 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.538701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.549651 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:23.038857 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.038988 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.050489 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:20.478788 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479296 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479341 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:20.479262 1467325 retry.go:31] will retry after 1.385706797s: waiting for machine to come up
	I0131 03:19:21.867040 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867480 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867506 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:21.867432 1467325 retry.go:31] will retry after 2.023566474s: waiting for machine to come up
	I0131 03:19:23.893713 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894188 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894222 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:23.894119 1467325 retry.go:31] will retry after 2.335724195s: waiting for machine to come up
	I0131 03:19:23.539335 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.539444 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.550866 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.038592 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.038710 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.050077 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.538579 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.538661 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.549810 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.039420 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.039512 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.051101 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.538549 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.538654 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.552821 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.039279 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.039395 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.050150 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.538699 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.538841 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.553086 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.038585 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.038701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.050685 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.539261 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.539392 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.550316 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:28.039448 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.039564 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.051196 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.231540 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231945 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231970 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:26.231895 1467325 retry.go:31] will retry after 2.956919877s: waiting for machine to come up
	I0131 03:19:29.190010 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190513 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190549 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:29.190433 1467325 retry.go:31] will retry after 3.186526476s: waiting for machine to come up
	I0131 03:19:28.539230 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.539326 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.551055 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.038675 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.038783 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.049926 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.538507 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.538606 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.549309 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.039257 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.039359 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.050555 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.539147 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.539286 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.550179 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.038685 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.038809 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.050144 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.538939 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.539024 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.549604 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.549647 1465727 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:31.549660 1465727 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:31.549678 1465727 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:31.549770 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:31.587751 1465727 cri.go:89] found id: ""
	I0131 03:19:31.587822 1465727 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:31.603397 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:31.612195 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:31.612263 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620959 1465727 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620984 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:31.737416 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.645078 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.861238 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.944897 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:33.048396 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:33.048496 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:33.587337 1466459 start.go:369] acquired machines lock for "embed-certs-958254" in 2m30.118621848s
	I0131 03:19:33.587411 1466459 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:33.587444 1466459 fix.go:54] fixHost starting: 
	I0131 03:19:33.587872 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:33.587906 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:33.608024 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0131 03:19:33.608545 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:33.609015 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:19:33.609048 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:33.609468 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:33.609659 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:33.609796 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:19:33.611524 1466459 fix.go:102] recreateIfNeeded on embed-certs-958254: state=Stopped err=<nil>
	I0131 03:19:33.611572 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	W0131 03:19:33.611752 1466459 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:33.613613 1466459 out.go:177] * Restarting existing kvm2 VM for "embed-certs-958254" ...
	I0131 03:19:32.379632 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380099 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380134 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Found IP for machine: 192.168.61.123
	I0131 03:19:32.380150 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserving static IP address...
	I0131 03:19:32.380555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserved static IP address: 192.168.61.123
	I0131 03:19:32.380594 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.380610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for SSH to be available...
	I0131 03:19:32.380647 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | skip adding static IP to network mk-default-k8s-diff-port-873005 - found existing host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"}
	I0131 03:19:32.380661 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Getting to WaitForSSH function...
	I0131 03:19:32.382401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.382787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382872 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH client type: external
	I0131 03:19:32.382903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa (-rw-------)
	I0131 03:19:32.382943 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:32.382959 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | About to run SSH command:
	I0131 03:19:32.382984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | exit 0
	I0131 03:19:32.470672 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:32.471097 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetConfigRaw
	I0131 03:19:32.471768 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.474225 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474597 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.474631 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474948 1465898 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/config.json ...
	I0131 03:19:32.475139 1465898 machine.go:88] provisioning docker machine ...
	I0131 03:19:32.475158 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:32.475374 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475542 1465898 buildroot.go:166] provisioning hostname "default-k8s-diff-port-873005"
	I0131 03:19:32.475564 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475720 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.478005 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478356 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.478391 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478466 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.478693 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.478871 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.479083 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.479287 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.479622 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.479636 1465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-873005 && echo "default-k8s-diff-port-873005" | sudo tee /etc/hostname
	I0131 03:19:32.608136 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-873005
	
	I0131 03:19:32.608173 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.611145 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611544 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.611580 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611716 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.611937 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612154 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612354 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.612511 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.612878 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.612903 1465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-873005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-873005/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-873005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:32.734103 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:32.734144 1465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:32.734176 1465898 buildroot.go:174] setting up certificates
	I0131 03:19:32.734196 1465898 provision.go:83] configureAuth start
	I0131 03:19:32.734209 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.734550 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.737468 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.737810 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.737844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.738096 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.740787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.741233 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741374 1465898 provision.go:138] copyHostCerts
	I0131 03:19:32.741429 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:32.741442 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:32.741498 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:32.741632 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:32.741642 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:32.741665 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:32.741716 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:32.741722 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:32.741738 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:32.741784 1465898 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-873005 san=[192.168.61.123 192.168.61.123 localhost 127.0.0.1 minikube default-k8s-diff-port-873005]
	I0131 03:19:32.850632 1465898 provision.go:172] copyRemoteCerts
	I0131 03:19:32.850695 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:32.850721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.853291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.853651 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.854016 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.854194 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.854361 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:32.943528 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0131 03:19:32.970345 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:32.995909 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:33.024408 1465898 provision.go:86] duration metric: configureAuth took 290.196472ms
	I0131 03:19:33.024438 1465898 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:33.024661 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:33.024755 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.027751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.028312 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028469 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.028719 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.028961 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.029180 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.029424 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.029790 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.029810 1465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:33.350806 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:33.350839 1465898 machine.go:91] provisioned docker machine in 875.685131ms
	I0131 03:19:33.350855 1465898 start.go:300] post-start starting for "default-k8s-diff-port-873005" (driver="kvm2")
	I0131 03:19:33.350871 1465898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:33.350895 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.351287 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:33.351334 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.353986 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354419 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.354443 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354689 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.354898 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.355046 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.355221 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.439603 1465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:33.443119 1465898 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:33.443145 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:33.443222 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:33.443320 1465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:33.443430 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:33.451425 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:33.471270 1465898 start.go:303] post-start completed in 120.397142ms
	I0131 03:19:33.471302 1465898 fix.go:56] fixHost completed within 19.459960903s
	I0131 03:19:33.471326 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.473691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474060 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.474091 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474244 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.474430 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474627 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474753 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.474918 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.475237 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.475249 1465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:33.587174 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671173.532604525
	
	I0131 03:19:33.587202 1465898 fix.go:206] guest clock: 1706671173.532604525
	I0131 03:19:33.587217 1465898 fix.go:219] Guest: 2024-01-31 03:19:33.532604525 +0000 UTC Remote: 2024-01-31 03:19:33.47130747 +0000 UTC m=+294.038044427 (delta=61.297055ms)
	I0131 03:19:33.587243 1465898 fix.go:190] guest clock delta is within tolerance: 61.297055ms
	I0131 03:19:33.587251 1465898 start.go:83] releasing machines lock for "default-k8s-diff-port-873005", held for 19.57594393s
	I0131 03:19:33.587282 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.587557 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:33.590395 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590776 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.590809 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590995 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591623 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591822 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591926 1465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:33.591999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.592054 1465898 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:33.592078 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.594999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595446 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.595477 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595644 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.595805 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595879 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596082 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596258 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.596286 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.596380 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.596390 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.596579 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596760 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596951 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.715222 1465898 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:33.721794 1465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:33.871506 1465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:33.877488 1465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:33.877596 1465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:33.896121 1465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:33.896156 1465898 start.go:475] detecting cgroup driver to use...
	I0131 03:19:33.896245 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:33.912876 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:33.927661 1465898 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:33.927743 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:33.944332 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:33.960438 1465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:34.086879 1465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:34.218866 1465898 docker.go:233] disabling docker service ...
	I0131 03:19:34.218946 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:34.233585 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:34.246358 1465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:34.387480 1465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:34.513082 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:34.526532 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:34.544801 1465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:34.544902 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.558806 1465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:34.558905 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.569251 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.582784 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.595979 1465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:34.608318 1465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:34.616417 1465898 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:34.616494 1465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:34.629018 1465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:34.638513 1465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:34.753541 1465898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:34.963779 1465898 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:34.963868 1465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:34.969755 1465898 start.go:543] Will wait 60s for crictl version
	I0131 03:19:34.969826 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:19:34.974176 1465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:35.020759 1465898 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:35.020850 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.072999 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.143712 1465898 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
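
	[Editor's note] The "Will wait 60s for socket path /var/run/crio/crio.sock" step a few lines above is a simple poll-until-present loop. Below is a minimal illustrative sketch of such a wait, not minikube's actual implementation; the helper name waitForSocket and the 500ms polling interval are assumptions for the example.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is present, the runtime is up
			}
			time.Sleep(500 * time.Millisecond) // illustrative polling interval
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("CRI socket is ready")
	}
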
	I0131 03:19:33.615078 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Start
	I0131 03:19:33.615258 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring networks are active...
	I0131 03:19:33.616056 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network default is active
	I0131 03:19:33.616376 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network mk-embed-certs-958254 is active
	I0131 03:19:33.616770 1466459 main.go:141] libmachine: (embed-certs-958254) Getting domain xml...
	I0131 03:19:33.617424 1466459 main.go:141] libmachine: (embed-certs-958254) Creating domain...
	I0131 03:19:35.016562 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting to get IP...
	I0131 03:19:35.017711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.018134 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.018234 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.018115 1467469 retry.go:31] will retry after 281.115622ms: waiting for machine to come up
	I0131 03:19:35.300987 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.301642 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.301672 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.301583 1467469 retry.go:31] will retry after 382.696531ms: waiting for machine to come up
	I0131 03:19:35.686371 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.686945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.686983 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.686881 1467469 retry.go:31] will retry after 467.397008ms: waiting for machine to come up
	I0131 03:19:36.156392 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.157129 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.157161 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.157087 1467469 retry.go:31] will retry after 588.034996ms: waiting for machine to come up
	I0131 03:19:36.747103 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.747739 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.747771 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.747711 1467469 retry.go:31] will retry after 570.532804ms: waiting for machine to come up
	I0131 03:19:37.319694 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.320231 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.320264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.320206 1467469 retry.go:31] will retry after 572.77687ms: waiting for machine to come up
	I0131 03:19:37.895308 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.895814 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.895844 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.895769 1467469 retry.go:31] will retry after 833.23491ms: waiting for machine to come up
	I0131 03:19:33.549149 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.048799 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.549314 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.048885 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.075463 1465727 api_server.go:72] duration metric: took 2.027068042s to wait for apiserver process to appear ...
	I0131 03:19:35.075490 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:35.075525 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:35.145198 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:35.148610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149052 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:35.149087 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149329 1465898 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:35.153543 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:35.169144 1465898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:35.169226 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:35.217572 1465898 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:35.217675 1465898 ssh_runner.go:195] Run: which lz4
	I0131 03:19:35.221897 1465898 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:35.226333 1465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:35.226373 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:36.870773 1465898 crio.go:444] Took 1.648904 seconds to copy over tarball
	I0131 03:19:36.870903 1465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:38.730812 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:38.731317 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:38.731367 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:38.731283 1467469 retry.go:31] will retry after 1.083923411s: waiting for machine to come up
	I0131 03:19:39.816550 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:39.817000 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:39.817035 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:39.816957 1467469 retry.go:31] will retry after 1.414569505s: waiting for machine to come up
	I0131 03:19:41.232711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:41.233072 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:41.233104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:41.233020 1467469 retry.go:31] will retry after 1.829994317s: waiting for machine to come up
	I0131 03:19:43.065343 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:43.065823 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:43.065857 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:43.065760 1467469 retry.go:31] will retry after 2.506323142s: waiting for machine to come up
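
	[Editor's note] The repeated "retry.go:31] will retry after ..." lines above show a retry loop with a growing wait while the restarted VM acquires a DHCP lease. A minimal sketch of that pattern follows; lookupIP, waitForIP, the attempt count, and the backoff growth are hypothetical stand-ins, not minikube's code or its actual backoff schedule.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
	func lookupIP(domain string) (string, error) {
		return "", errNoIP
	}

	// waitForIP retries lookupIP with an increasing delay between attempts.
	func waitForIP(domain string, attempts int) (string, error) {
		backoff := 300 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %s: waiting for machine to come up\n", backoff)
			time.Sleep(backoff)
			backoff += backoff / 2 // grow the wait between attempts
		}
		return "", fmt.Errorf("machine %q never reported an IP", domain)
	}

	func main() {
		if _, err := waitForIP("embed-certs-958254", 5); err != nil {
			fmt.Println(err)
		}
	}
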
	I0131 03:19:40.076389 1465727 api_server.go:269] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0131 03:19:40.076448 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.717017 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.717059 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:41.717079 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.738258 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.738291 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:42.075699 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.730135 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.730181 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:42.730203 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.805335 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.805375 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.076421 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.082935 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:43.082971 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.575664 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.582814 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:19:43.593073 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:19:43.593113 1465727 api_server.go:131] duration metric: took 8.517613988s to wait for apiserver health ...
	I0131 03:19:43.593127 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:43.593144 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:43.594982 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
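
	[Editor's note] The healthz sequence above (403 while RBAC is not yet bootstrapped, then 500 while poststarthooks are pending, then 200 "ok") is produced by a poll loop against https://<node>:8443/healthz. The sketch below illustrates such a loop; it is not minikube's api_server.go, and the helper name waitForHealthz, the polling interval, and the skipped TLS verification are assumptions for the example.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: the probe skips certificate verification because the
			// apiserver's serving cert is not trusted by the probing host.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				// 403 and 500 are treated as "not ready yet", matching the log above.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.63:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
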
	I0131 03:19:39.815034 1465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944091458s)
	I0131 03:19:39.815074 1465898 crio.go:451] Took 2.944224 seconds to extract the tarball
	I0131 03:19:39.815090 1465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:39.855696 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:39.904386 1465898 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:19:39.904418 1465898 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:19:39.904509 1465898 ssh_runner.go:195] Run: crio config
	I0131 03:19:39.972894 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:19:39.972928 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:39.972957 1465898 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:39.972985 1465898 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-873005 NodeName:default-k8s-diff-port-873005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:19:39.973201 1465898 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-873005"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:39.973298 1465898 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-873005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0131 03:19:39.973365 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:19:39.982097 1465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:39.982206 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:39.993781 1465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0131 03:19:40.012618 1465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:40.031973 1465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0131 03:19:40.049646 1465898 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:40.053498 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:40.066873 1465898 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005 for IP: 192.168.61.123
	I0131 03:19:40.066914 1465898 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:40.067198 1465898 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:40.067254 1465898 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:40.067376 1465898 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/client.key
	I0131 03:19:40.067474 1465898 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key.596e38b1
	I0131 03:19:40.067535 1465898 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key
	I0131 03:19:40.067748 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:40.067797 1465898 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:40.067813 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:40.067850 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:40.067885 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:40.067924 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:40.067978 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:40.068687 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:40.094577 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:40.117833 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:40.140782 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:40.163701 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:40.187177 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:40.218570 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:40.246136 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:40.275403 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:40.302040 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:40.327371 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:40.352927 1465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:40.371690 1465898 ssh_runner.go:195] Run: openssl version
	I0131 03:19:40.377700 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:40.387507 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393609 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393701 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.401095 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:40.415647 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:40.426902 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431720 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431803 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.437347 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:40.446986 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:40.457779 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462716 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462790 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.468321 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:40.481055 1465898 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:40.486096 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:40.492538 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:40.498664 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:40.504630 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:40.510588 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:40.516480 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
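	(The openssl/ln pairs above follow OpenSSL's hash-named CA lookup scheme: the link name in /etc/ssl/certs is the certificate's subject hash plus ".0", and "-checkend 86400" exits non-zero if the certificate expires within the next 24 hours. A minimal sketch of the same scheme, reusing the minikubeCA path from the log:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 && echo "valid for at least 24h"
	)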
	I0131 03:19:40.524391 1465898 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-873005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:40.524509 1465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:40.524570 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:40.575788 1465898 cri.go:89] found id: ""
	I0131 03:19:40.575887 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:40.585291 1465898 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:40.585320 1465898 kubeadm.go:636] restartCluster start
	I0131 03:19:40.585383 1465898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:40.594593 1465898 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:40.596215 1465898 kubeconfig.go:92] found "default-k8s-diff-port-873005" server: "https://192.168.61.123:8444"
	I0131 03:19:40.600123 1465898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:40.609224 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:40.609289 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:40.620769 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.110331 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.110450 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.121982 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.609492 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.609592 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.621972 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.109411 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.109515 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.124820 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.609296 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.609412 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.621029 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.109511 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.109606 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.124911 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.609397 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.609514 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.626240 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:44.109323 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.109419 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.124549 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
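	(The repeated "Checking apiserver status ..." / "stopped: unable to get apiserver pid" lines are minikube polling for a kube-apiserver process that has not come back after the restart. A rough manual equivalent on the node, assuming crictl is pointed at the cri-o socket, would be:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process yet"
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --name kube-apiserver
	)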
	I0131 03:19:45.573357 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:45.573785 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:45.573821 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:45.573735 1467469 retry.go:31] will retry after 3.608126135s: waiting for machine to come up
	I0131 03:19:43.596636 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:19:43.613239 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:19:43.655123 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:19:43.665773 1465727 system_pods.go:59] 7 kube-system pods found
	I0131 03:19:43.665819 1465727 system_pods.go:61] "coredns-5644d7b6d9-2g2fj" [fc3c718c-696b-4a57-83e2-d9ee3bed6923] Running
	I0131 03:19:43.665844 1465727 system_pods.go:61] "etcd-old-k8s-version-711547" [4c5a2527-ffa7-4771-8380-56556030ad90] Running
	I0131 03:19:43.665852 1465727 system_pods.go:61] "kube-apiserver-old-k8s-version-711547" [df7cbcbe-1aeb-4986-82e5-70d495b2579d] Running
	I0131 03:19:43.665859 1465727 system_pods.go:61] "kube-controller-manager-old-k8s-version-711547" [21cccd1c-4b8e-4d4f-956d-872aa474e9d8] Running
	I0131 03:19:43.665868 1465727 system_pods.go:61] "kube-proxy-7dtkz" [aac05831-252e-486d-8bc8-772731374a89] Running
	I0131 03:19:43.665875 1465727 system_pods.go:61] "kube-scheduler-old-k8s-version-711547" [da2f43ad-bbc3-44fb-a608-08c2ae08818f] Running
	I0131 03:19:43.665885 1465727 system_pods.go:61] "storage-provisioner" [f16355c3-b573-40f2-ad98-32c077f04e46] Running
	I0131 03:19:43.665894 1465727 system_pods.go:74] duration metric: took 10.742015ms to wait for pod list to return data ...
	I0131 03:19:43.665915 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:19:43.670287 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:19:43.670328 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:19:43.670343 1465727 node_conditions.go:105] duration metric: took 4.422551ms to run NodePressure ...
	I0131 03:19:43.670366 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:43.947579 1465727 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:19:43.952499 1465727 retry.go:31] will retry after 170.414704ms: kubelet not initialised
	I0131 03:19:44.130420 1465727 retry.go:31] will retry after 504.822426ms: kubelet not initialised
	I0131 03:19:44.640095 1465727 retry.go:31] will retry after 519.270243ms: kubelet not initialised
	I0131 03:19:45.164417 1465727 retry.go:31] will retry after 730.256814ms: kubelet not initialised
	I0131 03:19:45.903026 1465727 retry.go:31] will retry after 853.098887ms: kubelet not initialised
	I0131 03:19:46.764300 1465727 retry.go:31] will retry after 2.467014704s: kubelet not initialised
	I0131 03:19:44.609572 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.609682 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.625242 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.109761 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.109894 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.121467 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.610114 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.610210 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.621421 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.109926 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.109996 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.121003 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.609509 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.609649 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.620779 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.110208 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.110316 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.122909 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.609355 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.609474 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.620375 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.109993 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.110131 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.123531 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.610170 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.610266 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.620964 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.109874 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.109997 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.121344 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.183666 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:49.184174 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:49.184209 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:49.184103 1467469 retry.go:31] will retry after 3.277150176s: waiting for machine to come up
	I0131 03:19:52.465465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.465830 1466459 main.go:141] libmachine: (embed-certs-958254) Found IP for machine: 192.168.39.232
	I0131 03:19:52.465849 1466459 main.go:141] libmachine: (embed-certs-958254) Reserving static IP address...
	I0131 03:19:52.465863 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has current primary IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.466264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.466307 1466459 main.go:141] libmachine: (embed-certs-958254) Reserved static IP address: 192.168.39.232
	I0131 03:19:52.466331 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting for SSH to be available...
	I0131 03:19:52.466352 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | skip adding static IP to network mk-embed-certs-958254 - found existing host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"}
	I0131 03:19:52.466368 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Getting to WaitForSSH function...
	I0131 03:19:52.468562 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.468867 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.468900 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.469041 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH client type: external
	I0131 03:19:52.469074 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa (-rw-------)
	I0131 03:19:52.469117 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:52.469137 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | About to run SSH command:
	I0131 03:19:52.469151 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | exit 0
	I0131 03:19:52.554397 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:52.554838 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetConfigRaw
	I0131 03:19:52.555611 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.558511 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.558906 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.558945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.559188 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:19:52.559400 1466459 machine.go:88] provisioning docker machine ...
	I0131 03:19:52.559421 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:52.559645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559816 1466459 buildroot.go:166] provisioning hostname "embed-certs-958254"
	I0131 03:19:52.559831 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559994 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.562543 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.562901 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.562933 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.563085 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.563276 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563436 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563628 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.563800 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.564147 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.564161 1466459 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-958254 && echo "embed-certs-958254" | sudo tee /etc/hostname
	I0131 03:19:52.688777 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-958254
	
	I0131 03:19:52.688817 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.692015 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.692497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692797 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.693013 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693184 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693388 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.693579 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.694043 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.694071 1466459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-958254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-958254/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-958254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:52.821443 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:52.821489 1466459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:52.821543 1466459 buildroot.go:174] setting up certificates
	I0131 03:19:52.821567 1466459 provision.go:83] configureAuth start
	I0131 03:19:52.821583 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.821930 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.825108 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825496 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.825527 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825756 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.828269 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828621 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.828651 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828893 1466459 provision.go:138] copyHostCerts
	I0131 03:19:52.828964 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:52.828987 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:52.829069 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:52.829194 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:52.829209 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:52.829243 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:52.829323 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:52.829335 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:52.829366 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:52.829466 1466459 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.embed-certs-958254 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube embed-certs-958254]
	I0131 03:19:52.931760 1466459 provision.go:172] copyRemoteCerts
	I0131 03:19:52.931825 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:52.931856 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.935111 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935440 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.935465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935721 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.935915 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.936117 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.936273 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.024352 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:53.051185 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:19:53.076996 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:53.097919 1466459 provision.go:86] duration metric: configureAuth took 276.335726ms
	I0131 03:19:53.097951 1466459 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:53.098189 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:53.098319 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.101687 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102128 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.102178 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102334 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.102610 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.102877 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.103072 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.103309 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.103829 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.103860 1466459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:49.236547 1465727 retry.go:31] will retry after 1.793227218s: kubelet not initialised
	I0131 03:19:51.035248 1465727 retry.go:31] will retry after 2.779615352s: kubelet not initialised
	I0131 03:19:53.664145 1465496 start.go:369] acquired machines lock for "no-preload-625812" in 55.738696582s
	I0131 03:19:53.664205 1465496 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:53.664216 1465496 fix.go:54] fixHost starting: 
	I0131 03:19:53.664629 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:53.664680 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:53.683147 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0131 03:19:53.684034 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:53.684629 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:19:53.684660 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:53.685055 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:53.685266 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:19:53.685468 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:19:53.687260 1465496 fix.go:102] recreateIfNeeded on no-preload-625812: state=Stopped err=<nil>
	I0131 03:19:53.687288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	W0131 03:19:53.687444 1465496 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:53.689464 1465496 out.go:177] * Restarting existing kvm2 VM for "no-preload-625812" ...
	I0131 03:19:49.610240 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.610357 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.621551 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.110145 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.110248 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.121902 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.609752 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.609896 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.620729 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.620760 1465898 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:50.620769 1465898 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:50.620781 1465898 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:50.620842 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:50.655962 1465898 cri.go:89] found id: ""
	I0131 03:19:50.656034 1465898 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:50.670196 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:50.678438 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:50.678512 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686353 1465898 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686377 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:50.787983 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.766656 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.947670 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.020841 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.087869 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:52.087974 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:52.588285 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.088598 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.588683 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.088222 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.416070 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:53.416102 1466459 machine.go:91] provisioned docker machine in 856.686657ms
	I0131 03:19:53.416115 1466459 start.go:300] post-start starting for "embed-certs-958254" (driver="kvm2")
	I0131 03:19:53.416130 1466459 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:53.416152 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.416515 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:53.416550 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.419110 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.419525 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419836 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.420057 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.420223 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.420376 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.503785 1466459 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:53.507858 1466459 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:53.507890 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:53.508021 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:53.508094 1466459 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:53.508184 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:53.515845 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:53.537459 1466459 start.go:303] post-start completed in 121.324433ms
	I0131 03:19:53.537495 1466459 fix.go:56] fixHost completed within 19.950074846s
	I0131 03:19:53.537526 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.540719 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541097 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.541138 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541371 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.541590 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541707 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541924 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.542116 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.542438 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.542452 1466459 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0131 03:19:53.663950 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671193.614107467
	
	I0131 03:19:53.663981 1466459 fix.go:206] guest clock: 1706671193.614107467
	I0131 03:19:53.663991 1466459 fix.go:219] Guest: 2024-01-31 03:19:53.614107467 +0000 UTC Remote: 2024-01-31 03:19:53.537501013 +0000 UTC m=+170.232508862 (delta=76.606454ms)
	I0131 03:19:53.664051 1466459 fix.go:190] guest clock delta is within tolerance: 76.606454ms
	I0131 03:19:53.664061 1466459 start.go:83] releasing machines lock for "embed-certs-958254", held for 20.076673524s
	I0131 03:19:53.664095 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.664469 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:53.667439 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668024 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.668104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668219 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.668884 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669087 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669227 1466459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:53.669314 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.669346 1466459 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:53.669377 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.673093 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673248 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673420 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673194 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673517 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673557 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673580 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673667 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673734 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.673969 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.673982 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.674173 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.674180 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.674312 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.799336 1466459 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:53.805162 1466459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:53.952587 1466459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:53.958419 1466459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:53.958530 1466459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:53.971832 1466459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:53.971866 1466459 start.go:475] detecting cgroup driver to use...
	I0131 03:19:53.971946 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:53.988375 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:54.000875 1466459 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:54.000948 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:54.017770 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:54.034214 1466459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:54.154352 1466459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:54.314926 1466459 docker.go:233] disabling docker service ...
	I0131 03:19:54.315012 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:54.330557 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:54.344595 1466459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:54.468196 1466459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:54.630438 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:54.645472 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:54.665340 1466459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:54.665427 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.677758 1466459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:54.677843 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.690405 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.702616 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.712654 1466459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:54.723746 1466459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:54.735284 1466459 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:54.735358 1466459 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:54.751082 1466459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:54.762460 1466459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:54.916842 1466459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:55.105770 1466459 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:55.105862 1466459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:55.111870 1466459 start.go:543] Will wait 60s for crictl version
	I0131 03:19:55.112014 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:19:55.116743 1466459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:55.165427 1466459 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:55.165526 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.223389 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.272307 1466459 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
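
The sed edits logged at 03:19:54 above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as the pause image and "cgroupfs" as the cgroup manager, with conmon placed in the pod cgroup, before crio is restarted and verified with crictl. A minimal Go sketch of that same in-place rewrite is shown below; it is illustrative only (minikube actually drives these commands over SSH via ssh_runner), it needs root on the target host, and nothing in it beyond the logged sed scripts and file path comes from the report itself.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Apply the same sed scripts that appear in the log to the local
    // CRI-O drop-in config, then restart crio so the changes take effect.
    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	steps := [][]string{
    		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|`, conf},
    		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
    		{"sed", "-i", `/conmon_cgroup = .*/d`, conf},
    		{"sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
    		{"systemctl", "restart", "crio"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
    			fmt.Printf("%v failed: %v\n%s", s, err, out)
    			return
    		}
    	}
    }
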
	I0131 03:19:53.690828 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Start
	I0131 03:19:53.691030 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring networks are active...
	I0131 03:19:53.691801 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network default is active
	I0131 03:19:53.692297 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network mk-no-preload-625812 is active
	I0131 03:19:53.693485 1465496 main.go:141] libmachine: (no-preload-625812) Getting domain xml...
	I0131 03:19:53.694618 1465496 main.go:141] libmachine: (no-preload-625812) Creating domain...
	I0131 03:19:55.042532 1465496 main.go:141] libmachine: (no-preload-625812) Waiting to get IP...
	I0131 03:19:55.043607 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.044041 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.044180 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.044045 1467687 retry.go:31] will retry after 230.922351ms: waiting for machine to come up
	I0131 03:19:55.276816 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.277402 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.277435 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.277367 1467687 retry.go:31] will retry after 370.068692ms: waiting for machine to come up
	I0131 03:19:55.274017 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:55.277592 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278017 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:55.278056 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278356 1466459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:55.283769 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:55.298107 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:55.298188 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:55.338433 1466459 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:55.338558 1466459 ssh_runner.go:195] Run: which lz4
	I0131 03:19:55.342771 1466459 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:55.347160 1466459 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:55.347206 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:56.991725 1466459 crio.go:444] Took 1.648994 seconds to copy over tarball
	I0131 03:19:56.991821 1466459 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:53.823139 1465727 retry.go:31] will retry after 3.780431021s: kubelet not initialised
	I0131 03:19:57.615679 1465727 retry.go:31] will retry after 12.134340719s: kubelet not initialised
	I0131 03:19:54.588794 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.623052 1465898 api_server.go:72] duration metric: took 2.535180605s to wait for apiserver process to appear ...
	I0131 03:19:54.623092 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:54.623141 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:55.649133 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.649797 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.649838 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.649768 1467687 retry.go:31] will retry after 421.622241ms: waiting for machine to come up
	I0131 03:19:56.073712 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.074467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.074513 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.074269 1467687 retry.go:31] will retry after 587.05453ms: waiting for machine to come up
	I0131 03:19:56.663210 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.663749 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.663790 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.663678 1467687 retry.go:31] will retry after 620.56275ms: waiting for machine to come up
	I0131 03:19:57.286207 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.286688 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.286737 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.286647 1467687 retry.go:31] will retry after 674.764903ms: waiting for machine to come up
	I0131 03:19:57.963069 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.963573 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.963599 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.963520 1467687 retry.go:31] will retry after 1.10400582s: waiting for machine to come up
	I0131 03:19:59.068964 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:59.069440 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:59.069467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:59.069383 1467687 retry.go:31] will retry after 1.48867494s: waiting for machine to come up
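
The retry.go lines above show the machine-provisioning wait: libvirt is polled for the domain's DHCP lease, and each miss is followed by a progressively longer, jittered delay (230ms, 370ms, ..., 1.48s). The sketch below only captures that shape with a hypothetical retryUntil helper; it is not minikube's actual retry API, just an assumption-labelled illustration of polling a condition with growing backoff.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil polls check until it succeeds or the timeout elapses,
    // sleeping for a jittered, roughly exponential delay between attempts.
    func retryUntil(check func() (bool, error), timeout time.Duration) error {
    	delay := 200 * time.Millisecond
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		ok, err := check()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		// Similar in shape to the 230ms -> 370ms -> ... -> 1.48s sequence above.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	attempts := 0
    	_ = retryUntil(func() (bool, error) {
    		attempts++
    		return attempts > 5, nil // stand-in for "does the domain have an IP yet?"
    	}, time.Minute)
    }
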
	I0131 03:20:00.084963 1466459 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093104085s)
	I0131 03:20:00.085000 1466459 crio.go:451] Took 3.093238 seconds to extract the tarball
	I0131 03:20:00.085014 1466459 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:20:00.153533 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:00.203133 1466459 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:20:00.203215 1466459 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:20:00.203308 1466459 ssh_runner.go:195] Run: crio config
	I0131 03:20:00.266864 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:00.266898 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:00.266927 1466459 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:00.266955 1466459 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-958254 NodeName:embed-certs-958254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:00.267148 1466459 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-958254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:00.267253 1466459 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-958254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:00.267331 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:20:00.279543 1466459 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:00.279637 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:00.292463 1466459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0131 03:20:00.313102 1466459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:20:00.329962 1466459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0131 03:20:00.351487 1466459 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:00.355881 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
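
The bash one-liners at 03:19:55.283769 and 03:20:00.355881 above make /etc/hosts idempotent for host.minikube.internal and control-plane.minikube.internal: any existing line for the name is filtered out and a fresh "IP<TAB>name" entry is appended. A rough Go equivalent of that rewrite is sketched below; the helper name is invented, and the temp-file-plus-sudo-cp step from the logged command is omitted for brevity.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line for host and appends "ip<TAB>host",
    // mirroring the grep -v / echo pipeline in the log.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // discard the stale entry, as grep -v does
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.39.232", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
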
	I0131 03:20:00.368624 1466459 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254 for IP: 192.168.39.232
	I0131 03:20:00.368668 1466459 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:00.368836 1466459 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:00.368890 1466459 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:00.368997 1466459 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/client.key
	I0131 03:20:00.369071 1466459 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key.ca7bc7e0
	I0131 03:20:00.369108 1466459 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key
	I0131 03:20:00.369230 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:00.369257 1466459 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:00.369268 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:00.369294 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:00.369317 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:00.369341 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:00.369379 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:00.370093 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:00.392771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 03:20:00.416504 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:00.441357 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 03:20:00.469603 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:00.493533 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:00.521871 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:00.547738 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:00.572771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:00.596263 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:00.618766 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:00.642074 1466459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:00.657634 1466459 ssh_runner.go:195] Run: openssl version
	I0131 03:20:00.662869 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:00.673704 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678201 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678299 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.683872 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:00.694619 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:00.705736 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710374 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710451 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.715928 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:00.727620 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:00.738237 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742428 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742525 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.747812 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:00.757953 1466459 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:00.762418 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:00.768325 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:00.773824 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:00.779967 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:00.785943 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:00.791907 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
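
Each `openssl x509 -noout -in <cert> -checkend 86400` above asks whether the certificate is still valid 24 hours from now; a failing check is what would force the certificate to be regenerated before the cluster restart. The same test expressed in Go with crypto/x509 could look like the sketch below: the paths are copied from the log, while the validFor helper is hypothetical.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // for at least another d (the analogue of openssl's -checkend).
    func validFor(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM data", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	} {
    		ok, err := validFor(p, 24*time.Hour)
    		fmt.Println(p, ok, err)
    	}
    }
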
	I0131 03:20:00.797790 1466459 kubeadm.go:404] StartCluster: {Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:00.797882 1466459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:00.797989 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:00.843199 1466459 cri.go:89] found id: ""
	I0131 03:20:00.843289 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:00.853963 1466459 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:00.853994 1466459 kubeadm.go:636] restartCluster start
	I0131 03:20:00.854060 1466459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:00.864776 1466459 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:00.866019 1466459 kubeconfig.go:92] found "embed-certs-958254" server: "https://192.168.39.232:8443"
	I0131 03:20:00.868584 1466459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:00.878689 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:00.878765 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:00.891577 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.378755 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.378849 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.392040 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.879661 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.879770 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.894998 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.379551 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.379671 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.393008 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.879560 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.879680 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.896699 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:59.557240 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.557285 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.557308 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.612724 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.612775 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.624061 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.721181 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:19:59.721236 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.123708 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.134187 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.134229 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.624066 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.630341 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.630374 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.123728 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.131385 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.131479 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.623667 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.629384 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.629431 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.123701 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.129220 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.129272 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.623693 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.629228 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.629271 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.123778 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.132555 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:03.132617 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.623244 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.630694 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:20:03.649732 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:03.649778 1465898 api_server.go:131] duration metric: took 9.02667615s to wait for apiserver health ...
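The api_server.go entries above are a poll of the apiserver's /healthz endpoint, retried roughly every 500ms until it stops returning 500. The following is only an illustrative sketch of such a loop (not minikube's actual implementation), assuming the apiserver's self-signed certificate means TLS verification must be skipped:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    // TLS verification is disabled because the apiserver cert is self-signed.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz finally reported "ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the retry cadence seen above
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

A call such as waitForHealthz("https://192.168.61.123:8444/healthz", 4*time.Minute) would reproduce the behaviour logged here.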
	I0131 03:20:03.649792 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:20:03.649802 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:03.651944 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:03.653645 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:03.683325 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
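The two lines above create /etc/cni/net.d and drop a 457-byte bridge conflist into it. The exact file minikube writes is not shown in the log, so the sketch below uses a generic CNI bridge + host-local configuration of the same shape; every value in it is an assumption for illustration only:

    package main

    import "os"

    // bridgeConflist is an assumed, generic bridge CNI config; it is not the
    // concrete 457-byte file referenced in the log above.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
            panic(err)
        }
    }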
	I0131 03:20:03.719778 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:03.745975 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:03.746029 1465898 system_pods.go:61] "coredns-5dd5756b68-xlq7n" [0b9d620d-d79f-474e-aeb7-1357daaaa849] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:03.746044 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [2f2f474f-bee9-4df2-a5f6-2525bc15c99a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:03.746056 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [ba87e90b-b01b-4aa7-a4da-68d8e5c39020] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:03.746088 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [a96ebed4-d6f6-47b7-a8f6-b80acc9cde60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:03.746111 1465898 system_pods.go:61] "kube-proxy-trv94" [c085dfdb-0b75-40c1-b331-ef687888090e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:03.746121 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [b7adce77-8007-4316-9a2a-bdcec260840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:03.746141 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-fct8b" [b1d9d7e3-98c4-4b7a-acd1-d88fe109ef33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:03.746155 1465898 system_pods.go:61] "storage-provisioner" [be762288-ff88-44e7-938d-9ecc8a977526] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:03.746169 1465898 system_pods.go:74] duration metric: took 26.36215ms to wait for pod list to return data ...
	I0131 03:20:03.746183 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:03.755320 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:03.755365 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:03.755384 1465898 node_conditions.go:105] duration metric: took 9.194114ms to run NodePressure ...
	I0131 03:20:03.755413 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:04.124222 1465898 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130888 1465898 kubeadm.go:787] kubelet initialised
	I0131 03:20:04.130921 1465898 kubeadm.go:788] duration metric: took 6.663771ms waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130932 1465898 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:04.141883 1465898 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:00.559917 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:00.715628 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:00.715677 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:00.560506 1467687 retry.go:31] will retry after 1.67725835s: waiting for machine to come up
	I0131 03:20:02.240289 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:02.240826 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:02.240863 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:02.240781 1467687 retry.go:31] will retry after 2.023057937s: waiting for machine to come up
	I0131 03:20:04.266202 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:04.266733 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:04.266825 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:04.266715 1467687 retry.go:31] will retry after 2.664323304s: waiting for machine to come up
	I0131 03:20:03.379260 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.379366 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.395063 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:03.879206 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.879327 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.896172 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.378721 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.378829 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.395328 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.878823 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.878944 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.891061 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.379692 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.379795 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.395247 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.879667 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.879811 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.894445 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.378974 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.379107 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.391878 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.879343 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.879446 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.892910 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.379549 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.379647 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.391991 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.879610 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.879757 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.895280 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
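Each retry above runs pgrep over SSH and treats exit status 1 (no match) as "apiserver process not up yet". A hedged, local-only sketch of the same liveness check, using os/exec rather than minikube's ssh_runner:

    package main

    import (
        "os/exec"
        "time"
    )

    // apiserverRunning reports whether a kube-apiserver process started by
    // minikube is visible to pgrep; a non-zero exit simply means "no match yet".
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // waitForAPIServerProcess polls roughly every 500ms, like the retries above.
    func waitForAPIServerProcess(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }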
	I0131 03:20:06.154196 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:08.664906 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:06.932836 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:06.933529 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:06.933574 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:06.933459 1467687 retry.go:31] will retry after 3.065677387s: waiting for machine to come up
	I0131 03:20:10.001330 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:10.002186 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:10.002216 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:10.002101 1467687 retry.go:31] will retry after 3.036905728s: waiting for machine to come up
	I0131 03:20:08.379200 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.379310 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.392983 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:08.878955 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.879070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.890999 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.379530 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.379633 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.391351 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.878733 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.878814 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.891556 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.379098 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.379206 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.391233 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.879672 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.879786 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.892324 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.892364 1466459 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:10.892377 1466459 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:10.892393 1466459 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:10.892471 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:10.932354 1466459 cri.go:89] found id: ""
	I0131 03:20:10.932425 1466459 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:10.948273 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:10.957212 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:10.957285 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966329 1466459 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966369 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.093326 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.750399 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.960956 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.060752 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.148963 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:12.149070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:12.649736 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:13.150030 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:09.755152 1465727 retry.go:31] will retry after 13.770889272s: kubelet not initialised
	I0131 03:20:09.648674 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:09.648703 1465898 pod_ready.go:81] duration metric: took 5.506781604s waiting for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:09.648716 1465898 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656233 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:11.656258 1465898 pod_ready.go:81] duration metric: took 2.007535905s waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656267 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663570 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.663600 1465898 pod_ready.go:81] duration metric: took 1.007324961s waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668808 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.668832 1465898 pod_ready.go:81] duration metric: took 5.21407ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668843 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673583 1465898 pod_ready.go:92] pod "kube-proxy-trv94" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.673603 1465898 pod_ready.go:81] duration metric: took 4.754586ms waiting for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679052 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.679074 1465898 pod_ready.go:81] duration metric: took 5.453847ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679082 1465898 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
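The pod_ready entries above wait, pod by pod, for the Ready condition to turn True within 4m0s. A minimal client-go sketch of that per-pod check (illustrative only, not minikube's pod_ready.go), assuming an already-configured clientset:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout
    // elapses, mirroring the "waiting up to 4m0s for pod ..." entries above.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady {
                    return cond.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }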
	I0131 03:20:13.040911 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.041419 1465496 main.go:141] libmachine: (no-preload-625812) Found IP for machine: 192.168.72.23
	I0131 03:20:13.041451 1465496 main.go:141] libmachine: (no-preload-625812) Reserving static IP address...
	I0131 03:20:13.041471 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has current primary IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.042029 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.042083 1465496 main.go:141] libmachine: (no-preload-625812) Reserved static IP address: 192.168.72.23
	I0131 03:20:13.042105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | skip adding static IP to network mk-no-preload-625812 - found existing host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"}
	I0131 03:20:13.042124 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Getting to WaitForSSH function...
	I0131 03:20:13.042143 1465496 main.go:141] libmachine: (no-preload-625812) Waiting for SSH to be available...
	I0131 03:20:13.044263 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044670 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.044707 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044866 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH client type: external
	I0131 03:20:13.044890 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa (-rw-------)
	I0131 03:20:13.044958 1465496 main.go:141] libmachine: (no-preload-625812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:20:13.044979 1465496 main.go:141] libmachine: (no-preload-625812) DBG | About to run SSH command:
	I0131 03:20:13.044993 1465496 main.go:141] libmachine: (no-preload-625812) DBG | exit 0
	I0131 03:20:13.142763 1465496 main.go:141] libmachine: (no-preload-625812) DBG | SSH cmd err, output: <nil>: 
	I0131 03:20:13.143065 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetConfigRaw
	I0131 03:20:13.143763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.146827 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147322 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.147356 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147639 1465496 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/config.json ...
	I0131 03:20:13.147843 1465496 machine.go:88] provisioning docker machine ...
	I0131 03:20:13.147866 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:13.148104 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148307 1465496 buildroot.go:166] provisioning hostname "no-preload-625812"
	I0131 03:20:13.148332 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148510 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.151259 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151623 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.151658 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151808 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.152034 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152222 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152415 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.152601 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.152979 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.152996 1465496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-625812 && echo "no-preload-625812" | sudo tee /etc/hostname
	I0131 03:20:13.302957 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-625812
	
	I0131 03:20:13.302989 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.306162 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306612 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.306656 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306932 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.307236 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307458 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307644 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.307891 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.308385 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.308415 1465496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-625812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-625812/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-625812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:20:13.459393 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:20:13.459432 1465496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:20:13.459458 1465496 buildroot.go:174] setting up certificates
	I0131 03:20:13.459476 1465496 provision.go:83] configureAuth start
	I0131 03:20:13.459490 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.459818 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.462867 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463301 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.463333 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463516 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.466156 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466597 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.466629 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466788 1465496 provision.go:138] copyHostCerts
	I0131 03:20:13.466856 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:20:13.466870 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:20:13.466945 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:20:13.467051 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:20:13.467061 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:20:13.467099 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:20:13.467182 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:20:13.467195 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:20:13.467226 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:20:13.467295 1465496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.no-preload-625812 san=[192.168.72.23 192.168.72.23 localhost 127.0.0.1 minikube no-preload-625812]
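provision.go above issues a server certificate whose SAN list covers the machine IP, localhost and the hostname. A compact crypto/x509 sketch of that step (an assumed shape, not minikube's code), given an already-loaded CA certificate and key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a server cert with the CA, with SANs like those
    // logged above (machine IP, 127.0.0.1, localhost, minikube, hostname).
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, host string, ips []net.IP) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: host},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", host},
            IPAddresses:  append(ips, net.ParseIP("127.0.0.1")),
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        return os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0600)
    }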
	I0131 03:20:13.629331 1465496 provision.go:172] copyRemoteCerts
	I0131 03:20:13.629392 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:20:13.629420 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.632451 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.632871 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.632903 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.633155 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.633334 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.633502 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.633643 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:13.729991 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:20:13.755853 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:20:13.781125 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:20:13.803778 1465496 provision.go:86] duration metric: configureAuth took 344.286867ms
	I0131 03:20:13.803820 1465496 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:20:13.804030 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:20:13.804138 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.807234 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807675 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.807736 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807899 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.808108 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808307 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808461 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.808663 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.809033 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.809055 1465496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:20:14.179008 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:20:14.179039 1465496 machine.go:91] provisioned docker machine in 1.031179568s
	I0131 03:20:14.179055 1465496 start.go:300] post-start starting for "no-preload-625812" (driver="kvm2")
	I0131 03:20:14.179072 1465496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:20:14.179134 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.179500 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:20:14.179542 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.183050 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183483 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.183515 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183726 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.183919 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.184103 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.184299 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.282828 1465496 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:20:14.288098 1465496 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:20:14.288135 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:20:14.288242 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:20:14.288351 1465496 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:20:14.288482 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:20:14.297359 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:14.323339 1465496 start.go:303] post-start completed in 144.265535ms
	I0131 03:20:14.323379 1465496 fix.go:56] fixHost completed within 20.659162262s
	I0131 03:20:14.323408 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.326649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.327063 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327386 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.327693 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.327882 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.328068 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.328260 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:14.328638 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:14.328668 1465496 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:20:14.464275 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671214.411008277
	
	I0131 03:20:14.464299 1465496 fix.go:206] guest clock: 1706671214.411008277
	I0131 03:20:14.464307 1465496 fix.go:219] Guest: 2024-01-31 03:20:14.411008277 +0000 UTC Remote: 2024-01-31 03:20:14.32338512 +0000 UTC m=+358.954052365 (delta=87.623157ms)
	I0131 03:20:14.464327 1465496 fix.go:190] guest clock delta is within tolerance: 87.623157ms
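fix.go above compares the guest clock against the host clock and only resynchronises when the delta leaves a tolerance window; the check itself is just an absolute-difference comparison. A tiny sketch (the tolerance value is an assumption):

    package main

    import "time"

    // clockWithinTolerance reports whether guest and host clocks differ by no
    // more than the allowed skew; the log above accepts an ~88ms delta.
    func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }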
	I0131 03:20:14.464332 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 20.800154018s
	I0131 03:20:14.464349 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.464664 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:14.467627 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.467912 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.467952 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.468086 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468622 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468827 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468918 1465496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:20:14.468974 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.469103 1465496 ssh_runner.go:195] Run: cat /version.json
	I0131 03:20:14.469143 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.471884 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472243 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472408 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472472 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472507 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472426 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472696 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472810 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472825 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473046 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473048 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473275 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.473288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473547 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.563583 1465496 ssh_runner.go:195] Run: systemctl --version
	I0131 03:20:14.602977 1465496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:20:14.752069 1465496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:20:14.759056 1465496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:20:14.759149 1465496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:20:14.778064 1465496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:20:14.778102 1465496 start.go:475] detecting cgroup driver to use...
	I0131 03:20:14.778197 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:20:14.791672 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:20:14.803938 1465496 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:20:14.804018 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:20:14.816689 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:20:14.829415 1465496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:20:14.956428 1465496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:20:15.082172 1465496 docker.go:233] disabling docker service ...
	I0131 03:20:15.082260 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:20:15.094675 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:20:15.106262 1465496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:20:15.229460 1465496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:20:15.341585 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:20:15.354587 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:20:15.374141 1465496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:20:15.374228 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.386153 1465496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:20:15.386224 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.398130 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.407759 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.417278 1465496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:20:15.427128 1465496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:20:15.437249 1465496 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:20:15.437318 1465496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:20:15.451522 1465496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
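The three commands above probe the bridge-netfilter sysctl, fall back to loading br_netfilter when the key is absent, and then enable IPv4 forwarding. A hedged os/exec sketch of that sequence, using the same paths as the log:

    package main

    import "os/exec"

    // ensureBridgeNetfilter mirrors the preparation logged above: if the sysctl
    // key is missing, load the br_netfilter module, then turn on ip_forward.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return err
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }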
	I0131 03:20:15.460741 1465496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:20:15.564813 1465496 ssh_runner.go:195] Run: sudo systemctl restart crio
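Condensed, the runtime preparation logged above amounts to the following shell steps (a hand-runnable sketch of what the log shows, not minikube's literal code path):

    # Stop and mask the Docker-based runtimes so CRI-O is the only active CRI.
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    # Point crictl at the CRI-O socket.
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    # Pin the pause image and the cgroup driver in the CRI-O drop-in config.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # Kernel prerequisites for bridged pod traffic, then restart CRI-O.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio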
	I0131 03:20:15.729334 1465496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:20:15.729436 1465496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:20:15.734544 1465496 start.go:543] Will wait 60s for crictl version
	I0131 03:20:15.734634 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:15.738536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:20:15.789942 1465496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:20:15.790066 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.844864 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.895286 1465496 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0131 03:20:13.649824 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.150192 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.649250 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.677858 1466459 api_server.go:72] duration metric: took 2.528895825s to wait for apiserver process to appear ...
	I0131 03:20:14.677890 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:14.677920 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:14.688429 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:17.190684 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:15.896701 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:15.899655 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900079 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:15.900105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900392 1465496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0131 03:20:15.904607 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
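The /etc/hosts update above uses a replace-then-copy idiom so the record is never duplicated: filter out any existing host.minikube.internal line, append the current one, and copy the temp file back into place (the same pattern reappears later for control-plane.minikube.internal):

    # Rebuild /etc/hosts without a stale entry, then append the current mapping.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.72.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts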
	I0131 03:20:15.916202 1465496 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 03:20:15.916255 1465496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:15.964126 1465496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0131 03:20:15.964157 1465496 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:20:15.964213 1465496 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.964249 1465496 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.964291 1465496 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.964278 1465496 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.964411 1465496 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0131 03:20:15.964472 1465496 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.964696 1465496 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.964771 1465496 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:15.965842 1465496 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.966659 1465496 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0131 03:20:15.966705 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.966737 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.967221 1465496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.967386 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.157890 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.160428 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0131 03:20:16.170727 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.185791 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.209517 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.212835 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.215809 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.221405 1465496 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0131 03:20:16.221457 1465496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.221504 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369265 1465496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0131 03:20:16.369302 1465496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0131 03:20:16.369324 1465496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.369340 1465496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.369344 1465496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0131 03:20:16.369367 1465496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.369382 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369392 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369404 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369474 1465496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0131 03:20:16.369494 1465496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.369506 1465496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0131 03:20:16.369521 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369529 1465496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.369562 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369617 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.384313 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.384333 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.470950 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0131 03:20:16.471044 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.471091 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.496271 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0131 03:20:16.496296 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496398 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496485 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:16.496488 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496338 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.496494 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496730 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.531464 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531550 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0131 03:20:16.531570 1465496 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531594 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531640 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531595 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0131 03:20:16.531669 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531638 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531738 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0131 03:20:16.536091 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0131 03:20:16.805880 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339660 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.807978952s)
	I0131 03:20:20.339703 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0131 03:20:20.339719 1465496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.533795146s)
	I0131 03:20:20.339744 1465496 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339785 1465496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0131 03:20:20.339823 1465496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339829 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339863 1465496 ssh_runner.go:195] Run: which crictl
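The cache-load sequence interleaved above and below follows one pattern per image: if the runtime does not already hold the image at the expected hash, remove whatever is there and load the tarball from minikube's on-disk cache. A per-image sketch assembled from the commands the log shows, with etcd as the example:

    IMG=registry.k8s.io/etcd:3.5.10-0
    TARBALL=/var/lib/minikube/images/etcd_3.5.10-0
    # Already present in the podman/CRI-O store? Then nothing to do.
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # drop any stale copy
        sudo podman load -i "$TARBALL"                        # load the cached tarball
    fi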
	I0131 03:20:19.144422 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.144461 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.144481 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.199050 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.199092 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.199110 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.248370 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.248405 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:19.678887 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.699942 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.699975 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.178212 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.196360 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:20.196408 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.679003 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.685599 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:20:20.693909 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:20.693939 1466459 api_server.go:131] duration metric: took 6.016042033s to wait for apiserver health ...
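The healthz probes above show the usual restart progression: 403 while anonymous access is still blocked, 500 while post-start hooks (RBAC bootstrap roles, priority classes, and so on) finish, then 200. A rough manual equivalent of the poll, using curl against the same endpoint (-k skips certificate verification for a quick check; minikube performs the same probe from Go):

    # Poll the apiserver health endpoint until it reports "ok".
    until curl -sk https://192.168.39.232:8443/healthz | grep -qx ok; do
        sleep 0.5
    done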
	I0131 03:20:20.693972 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:20.693978 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:20.695935 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:20.697296 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:20.708301 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
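The 457-byte conflist written above is not reproduced in the log; for orientation, a representative bridge-plus-portmap CNI config of the same general shape looks like the following (contents are illustrative only, not the exact file minikube installs):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }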
	I0131 03:20:20.730496 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:20.741756 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:20.741799 1466459 system_pods.go:61] "coredns-5dd5756b68-ntmxp" [bb90dd61-c60a-4beb-b77c-66c4b5ce56a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:20.741810 1466459 system_pods.go:61] "etcd-embed-certs-958254" [69a5883a-307d-47d1-86ef-6f76bf77bdff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:20.741830 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [1cad3813-0df9-4729-862f-d1ab237d297c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:20.741841 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [34bfed89-5c8c-4294-843b-d32261c8fb5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:20.741851 1466459 system_pods.go:61] "kube-proxy-q6dmr" [092e0786-80f7-480c-8ede-95e11c1f17a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:20.741862 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [28c8d75e-9517-4ccc-85ef-5b535973c829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:20.741876 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-d8x5f" [fc69fea8-ab7b-4f3d-980f-7ad995027e77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:20.741889 1466459 system_pods.go:61] "storage-provisioner" [5026a00d-8df8-408a-a164-cf22697260e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:20.741898 1466459 system_pods.go:74] duration metric: took 11.375298ms to wait for pod list to return data ...
	I0131 03:20:20.741912 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:20.748073 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:20.748110 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:20.748125 1466459 node_conditions.go:105] duration metric: took 6.206594ms to run NodePressure ...
	I0131 03:20:20.748147 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:21.022867 1466459 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028572 1466459 kubeadm.go:787] kubelet initialised
	I0131 03:20:21.028600 1466459 kubeadm.go:788] duration metric: took 5.696903ms waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028612 1466459 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
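The per-pod readiness waits that follow are roughly what one would get from kubectl directly; for example, an equivalent manual check (the profile name is assumed to double as the kubeconfig context; this is not the code path minikube uses internally):

    kubectl --context embed-certs-958254 -n kube-system \
        wait --for=condition=Ready pod/coredns-5dd5756b68-ntmxp --timeout=4m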
	I0131 03:20:21.034373 1466459 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.040977 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041008 1466459 pod_ready.go:81] duration metric: took 6.605955ms waiting for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.041021 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041029 1466459 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.047304 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047360 1466459 pod_ready.go:81] duration metric: took 6.317423ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.047379 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047397 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.054356 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054380 1466459 pod_ready.go:81] duration metric: took 6.969808ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.054393 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054405 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.066327 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:19.688890 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.187659 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.403415 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.063558989s)
	I0131 03:20:22.403448 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0131 03:20:22.403467 1465496 ssh_runner.go:235] Completed: which crictl: (2.063583602s)
	I0131 03:20:22.403536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:22.403473 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.403667 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.453126 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0131 03:20:22.453255 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:25.325221 1465496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.871938157s)
	I0131 03:20:25.325266 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0131 03:20:25.325371 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.92167713s)
	I0131 03:20:25.325397 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0131 03:20:25.325430 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.325498 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.562106 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.562702 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.562730 1466459 pod_ready.go:81] duration metric: took 5.508313651s waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.562740 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570741 1466459 pod_ready.go:92] pod "kube-proxy-q6dmr" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.570776 1466459 pod_ready.go:81] duration metric: took 8.02796ms waiting for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570788 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.532998 1465727 kubeadm.go:787] kubelet initialised
	I0131 03:20:23.533031 1465727 kubeadm.go:788] duration metric: took 39.585413252s waiting for restarted kubelet to initialise ...
	I0131 03:20:23.533041 1465727 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:23.538956 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545637 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.545665 1465727 pod_ready.go:81] duration metric: took 6.67341ms waiting for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545679 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552018 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.552047 1465727 pod_ready.go:81] duration metric: took 6.359089ms waiting for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552061 1465727 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557416 1465727 pod_ready.go:92] pod "etcd-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.557446 1465727 pod_ready.go:81] duration metric: took 5.375834ms waiting for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557458 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563429 1465727 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.563458 1465727 pod_ready.go:81] duration metric: took 5.99092ms waiting for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563470 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931088 1465727 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.931123 1465727 pod_ready.go:81] duration metric: took 367.644608ms waiting for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931135 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330635 1465727 pod_ready.go:92] pod "kube-proxy-7dtkz" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.330663 1465727 pod_ready.go:81] duration metric: took 399.520658ms waiting for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330673 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731521 1465727 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.731554 1465727 pod_ready.go:81] duration metric: took 400.873461ms waiting for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731568 1465727 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.738444 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:24.686688 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.688623 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:29.186579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.180697 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.855170809s)
	I0131 03:20:28.180729 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0131 03:20:28.180767 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:28.180841 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:29.652395 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.471522862s)
	I0131 03:20:29.652425 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0131 03:20:29.652463 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:29.652540 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:28.578108 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.077401 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.080970 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.739586 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:30.739736 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.238815 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.187176 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.188862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.502715 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.85014178s)
	I0131 03:20:31.502759 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0131 03:20:31.502787 1465496 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:31.502844 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:32.554143 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.051250967s)
	I0131 03:20:32.554188 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0131 03:20:32.554229 1465496 cache_images.go:123] Successfully loaded all cached images
	I0131 03:20:32.554282 1465496 cache_images.go:92] LoadImages completed in 16.590108265s
	I0131 03:20:32.554386 1465496 ssh_runner.go:195] Run: crio config
	I0131 03:20:32.619584 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:32.619612 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:32.619637 1465496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:32.619665 1465496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-625812 NodeName:no-preload-625812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:32.619840 1465496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-625812"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:32.619939 1465496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-625812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:32.620017 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0131 03:20:32.628855 1465496 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:32.628963 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:32.636481 1465496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0131 03:20:32.654320 1465496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0131 03:20:32.670366 1465496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0131 03:20:32.688615 1465496 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:32.692444 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:32.705599 1465496 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812 for IP: 192.168.72.23
	I0131 03:20:32.705644 1465496 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:32.705822 1465496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:32.705894 1465496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:32.705997 1465496 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/client.key
	I0131 03:20:32.706058 1465496 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key.a30a8404
	I0131 03:20:32.706092 1465496 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key
	I0131 03:20:32.706194 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:32.706221 1465496 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:32.706231 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:32.706258 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:32.706284 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:32.706310 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:32.706349 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:32.707138 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:32.729972 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:20:32.753498 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:32.775599 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:20:32.799455 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:32.822732 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:32.845839 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:32.868933 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:32.891565 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:32.914752 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:32.937305 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:32.960253 1465496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:32.976285 1465496 ssh_runner.go:195] Run: openssl version
	I0131 03:20:32.981630 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:32.990533 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994914 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994986 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:33.000249 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:33.009516 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:33.018643 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023046 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023106 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.028238 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:33.036925 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:33.045708 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050442 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050536 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.056067 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
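The /etc/ssl/certs/<hash>.0 names used in the three linking steps above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: OpenSSL finds a trusted CA by hashing its subject and reading the matching <hash>.0 file in /etc/ssl/certs. A minimal sketch of recreating one of these links by hand, using the same minikubeCA.pem path from the log (the hash value is simply whatever openssl prints for that certificate):

    # compute the subject hash, then point <hash>.0 at the CA file
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"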
	I0131 03:20:33.065200 1465496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:33.069489 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:33.075140 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:33.080981 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:33.087018 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:33.092665 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:33.099605 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
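Each of the -checkend 86400 probes above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: openssl exits 0 if it will, non-zero if it expires inside that window, which lets the restart path screen for control-plane certs that are about to expire. An equivalent standalone check against one of the same files:

    # exit status tells you whether the cert survives the next 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"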
	I0131 03:20:33.106207 1465496 kubeadm.go:404] StartCluster: {Name:no-preload-625812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:33.106310 1465496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:33.106376 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:33.150992 1465496 cri.go:89] found id: ""
	I0131 03:20:33.151088 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:33.161105 1465496 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:33.161131 1465496 kubeadm.go:636] restartCluster start
	I0131 03:20:33.161219 1465496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:33.170638 1465496 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.172109 1465496 kubeconfig.go:92] found "no-preload-625812" server: "https://192.168.72.23:8443"
	I0131 03:20:33.175582 1465496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:33.185433 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.185523 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.196952 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
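The "Checking apiserver status" probes in this stretch use pgrep -xnf: -f matches the pattern against the full command line, -x requires that match to be exact, and -n returns only the newest matching process. Exit status 1, seen repeatedly below, just means no kube-apiserver with "minikube" in its command line is running yet, so minikube keeps polling. Roughly the same check by hand:

    # prints the PID when the apiserver is up, otherwise reports it is still down
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running yet"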
	I0131 03:20:33.685512 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.685612 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.696682 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.186433 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.197957 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.685533 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.685640 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.696731 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:35.186267 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.186369 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.197982 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.578014 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:33.578038 1466459 pod_ready.go:81] duration metric: took 7.007241801s waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:33.578047 1466459 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:35.585039 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.585299 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.737680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.740698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686379 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:38.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686193 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.686284 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.697343 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.185858 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.185960 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.197161 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.685546 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.685646 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.696796 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.186186 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.186280 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.197357 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.685916 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.686012 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.700288 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.185723 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.185820 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.197397 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.685651 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.685757 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.697204 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.185744 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.185844 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.198598 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.686185 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.686267 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.697736 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.186432 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.198099 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.085028 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.585359 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.238117 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.239129 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.687687 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:43.186737 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.686132 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.686236 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.699172 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.185642 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.185744 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.198284 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.685827 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.685935 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.698501 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.185953 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.186088 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.196802 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.686371 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.686445 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.698536 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.186445 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:43.186560 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:43.198640 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.198679 1465496 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:43.198690 1465496 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:43.198704 1465496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:43.198765 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:43.235648 1465496 cri.go:89] found id: ""
	I0131 03:20:43.235740 1465496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:43.252848 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:43.263501 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:43.263590 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274044 1465496 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274075 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:43.402961 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.454642 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051640672s)
	I0131 03:20:44.454673 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.660185 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.744795 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
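The phase-by-phase kubeadm calls above regenerate just the pieces the restart needs: certificates, the kubeconfig files the earlier config check reported missing, the kubelet bootstrap, and the static-pod manifests for the control plane and local etcd. A quick, assumed way to confirm the regenerated artifacts on the node (the .conf paths are the same ones listed in the failed config check; the manifests directory is the standard kubeadm location):

    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    sudo ls /etc/kubernetes/manifests/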
	I0131 03:20:44.816577 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:44.816690 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:45.316895 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:44.591170 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.085954 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:44.739730 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.240982 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.686082 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.687451 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.816800 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.317657 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.816892 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.317696 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.342389 1465496 api_server.go:72] duration metric: took 2.525810484s to wait for apiserver process to appear ...
	I0131 03:20:47.342423 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:47.342448 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.385155 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.385192 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.385206 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.431253 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.431293 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.842624 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.847644 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:51.847685 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.343335 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.348723 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:52.348780 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.842935 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.848263 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:20:52.863072 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:20:52.863104 1465496 api_server.go:131] duration metric: took 5.520672047s to wait for apiserver health ...
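The healthz progression above is the normal shape of an apiserver restart: 403 while requests are still treated as system:anonymous, 500 while post-start hooks such as rbac/bootstrap-roles are still running, then 200 once bootstrap finishes. The same endpoint can be probed directly from the node; -k skips certificate verification since no client credentials are presented (sketch only, address and port taken from the log):

    curl -k https://192.168.72.23:8443/healthz
    # for the per-check [+]/[-] breakdown seen in the 500 responses:
    curl -k "https://192.168.72.23:8443/healthz?verbose"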
	I0131 03:20:52.863113 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:52.863120 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:52.865141 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:49.585837 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.087030 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:49.738408 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:51.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:50.186754 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.197217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.866822 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:52.881451 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
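The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is minikube's generated bridge CNI configuration; its exact contents are not shown in this log. For orientation only, a hypothetical bridge conflist of the same general shape (every value here is illustrative, not what minikube actually wrote):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF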
	I0131 03:20:52.918954 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:52.930533 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:52.930566 1465496 system_pods.go:61] "coredns-76f75df574-4qhpt" [9a5c2a49-f787-456a-9d15-cea2e111c6fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:52.930575 1465496 system_pods.go:61] "etcd-no-preload-625812" [2dbdb2c3-dd04-40de-80b4-caf18f1df2e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:52.930587 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [fd209808-5ebc-464e-b14b-88c6c830d7bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:52.930593 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [1f2cb9ec-cec9-4c45-8b78-0c9a9c0c9821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:52.930600 1465496 system_pods.go:61] "kube-proxy-8fdx9" [d1311d92-482b-4aa2-9dd3-053597717aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:52.930607 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [f7b0ba21-6c1d-4c67-aa69-6086b28ddf78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:52.930614 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-sjndx" [6bcdb3bb-4e28-4127-a273-091b44059d10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:52.930620 1465496 system_pods.go:61] "storage-provisioner" [66a4003b-e35e-4216-8d27-e8897a6ddc71] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:52.930627 1465496 system_pods.go:74] duration metric: took 11.645516ms to wait for pod list to return data ...
	I0131 03:20:52.930635 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:52.943250 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:52.943291 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:52.943306 1465496 node_conditions.go:105] duration metric: took 12.665118ms to run NodePressure ...
	I0131 03:20:52.943328 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:53.231968 1465496 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239131 1465496 kubeadm.go:787] kubelet initialised
	I0131 03:20:53.239162 1465496 kubeadm.go:788] duration metric: took 7.159608ms waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239171 1465496 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
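From here on, the log interleaves pod_ready polling from four concurrent start operations (1465496, 1465727, 1465898, 1466459). The three other operations are each polling their cluster's metrics-server pod for the Ready condition, and 1465496 joins them once its own control-plane pods are Ready; none of the metrics-server pods reach Ready in this window, which is why these lines repeat for minutes. A roughly equivalent manual check (context name and label are assumptions based on the profile name and the usual metrics-server manifest):

    kubectl --context no-preload-625812 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context no-preload-625812 -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m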
	I0131 03:20:53.248561 1465496 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:55.256463 1465496 pod_ready.go:102] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.585633 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.086475 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.239922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.738132 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.686904 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.687249 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.187579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.261900 1465496 pod_ready.go:92] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:57.261928 1465496 pod_ready.go:81] duration metric: took 4.013340748s waiting for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:57.261940 1465496 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:59.268779 1465496 pod_ready.go:102] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.586066 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:02.085212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:58.739138 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.739184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:03.243732 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:01.686704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.186767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.771061 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:00.771093 1465496 pod_ready.go:81] duration metric: took 3.509144879s waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:00.771107 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279749 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.279778 1465496 pod_ready.go:81] duration metric: took 1.508661327s waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279792 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286520 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.286550 1465496 pod_ready.go:81] duration metric: took 6.748377ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286564 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292455 1465496 pod_ready.go:92] pod "kube-proxy-8fdx9" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.292479 1465496 pod_ready.go:81] duration metric: took 5.904786ms waiting for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292491 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:04.300076 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.086312 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.086965 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:05.737969 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:07.738025 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.686645 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:09.186769 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.300932 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.799183 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:06.799208 1465496 pod_ready.go:81] duration metric: took 4.506710382s waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:06.799220 1465496 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:08.806102 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:08.585128 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.586208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.085360 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.238339 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:12.739920 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.186807 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.686030 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.306903 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.808471 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.085478 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.584968 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.238994 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.738301 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.686243 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.687966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:16.306169 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:18.306368 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.585283 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.085635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.738554 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:21.739391 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.186216 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.186318 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.186605 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.807270 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:23.307367 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.086508 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.585310 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.239650 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.739133 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.687020 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.186319 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:25.807083 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:27.807373 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.809229 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:28.586494 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.085758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.086070 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.237951 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.239234 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.186403 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.186539 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:32.305137 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:34.306664 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.586212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.085235 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.737751 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.239168 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.187669 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:37.686468 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.806650 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:39.305925 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.586428 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.084565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.739723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.237973 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.186321 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:42.187314 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:44.188149 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:41.307318 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.806323 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.085539 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.585341 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.239462 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.738184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:46.686042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.686866 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.806734 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.305446 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.305723 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.085346 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.085442 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:49.738268 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.239669 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.691518 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:53.186195 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.306654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.806020 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.085761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.586368 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.738548 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.739623 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:55.686288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:57.687383 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.807570 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.309552 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.084865 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.085071 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.085111 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.239410 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.239532 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:00.186408 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:02.186782 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.186839 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.806329 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.584749 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:07.586565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.739463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.740128 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.237766 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.187392 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.685886 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.805996 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.807179 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.086003 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.585799 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.238067 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.239177 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.686223 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.686341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:11.305779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:13.307616 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.085808 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.584477 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:14.738859 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.238767 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.187173 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.687034 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.806730 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:18.306392 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.584606 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.585553 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.738470 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.739486 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.185802 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:22.187625 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.806949 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.306121 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:25.306685 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.585692 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.085348 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.237900 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.238299 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.686574 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.687740 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.186290 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:27.805534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.806722 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.585853 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.087573 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.738699 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:30.740922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.241273 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.687338 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.186661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:32.306153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.306543 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.584981 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.585484 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.085009 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.739413 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.240386 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.687329 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:39.185388 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.308028 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.806629 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.085644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.585560 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.242599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.737723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.186288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.186859 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.306389 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.586579 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.085969 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.739244 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.237508 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:45.188774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.687222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:46.306909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:48.807077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.584667 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.584768 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.239422 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.687896 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:52.188700 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.306677 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.806006 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.585081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.585777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.085122 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.237822 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:56.238861 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.686276 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:57.186263 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.806184 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.306128 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.306364 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.588396 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.598213 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.737414 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.737727 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.739935 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:59.685823 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:01.686758 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:04.185852 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.807107 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.305740 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.085415 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.585036 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.239645 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.739347 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:06.686504 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:08.687322 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.305816 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.305938 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.586253 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.085522 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:10.239099 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.738591 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.186874 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.686181 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.306129 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.806507 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.585172 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.586137 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.738697 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.739523 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:15.686511 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:17.687193 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.306767 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.808302 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:19.085852 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.586641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.739573 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.238839 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:20.187546 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:22.687140 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.306401 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.307029 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.085548 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:26.586436 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.737681 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.737740 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.687572 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.186506 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.808456 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:28.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:30.307207 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.085660 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.087058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.739207 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.238687 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.686331 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.688381 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.187104 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.805987 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.806181 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:33.586190 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.085219 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.085516 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.238857 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.239092 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.687993 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.688870 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.808335 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.085919 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.585866 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.738192 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.738455 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.739283 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.185567 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.186680 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.307589 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.309027 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:44.586117 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.085597 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.238409 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.240204 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.685781 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.686167 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.807531 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.807973 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:50.308410 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.086271 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.086456 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.737691 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.739418 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.686475 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.687616 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:52.806510 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.806619 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:53.586673 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.085541 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.085777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.238680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.238735 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.239259 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.685972 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.686560 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.806707 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.806764 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.087035 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.088546 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.239507 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.240463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.686709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.687576 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.806909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:03.306534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.307522 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.585131 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.585178 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.738411 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.738605 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.186000 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.686048 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.806058 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.306442 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:08.585611 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.088448 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:09.238896 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.239934 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.186391 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.187940 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.680057 1465898 pod_ready.go:81] duration metric: took 4m0.000955013s waiting for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:12.680105 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:12.680132 1465898 pod_ready.go:38] duration metric: took 4m8.549185211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:12.680181 1465898 kubeadm.go:640] restartCluster took 4m32.094843295s
	W0131 03:24:12.680310 1465898 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:12.680376 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
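Note on the repeated pod_ready.go:102 lines above and the 4m0s timeout that ends them: the following is a minimal sketch (assuming client-go; this is not minikube's actual bootstrap code) of the kind of poll loop that produces that pattern — re-read the pod every couple of seconds, report its Ready condition, and give up at the deadline.

// Illustrative only; function and package names are assumptions.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady blocks until the named pod reports Ready=True or the timeout expires.
func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					if cond.Status == corev1.ConditionTrue {
						return nil
					}
					// Mirrors the log pattern: pod "<name>" in "<ns>" namespace has status "Ready":"False"
					fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, cond.Status)
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting %s for pod %q in %q namespace to be \"Ready\"", timeout, name, ns)
}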
	I0131 03:24:12.307149 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:14.307483 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.586901 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.087404 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.738698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.239338 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.239499 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.806617 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:19.305298 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.585870 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.087112 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:20.737368 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:22.738599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.306715 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.807030 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.586072 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:25.586464 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.586525 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:24.731792 1465727 pod_ready.go:81] duration metric: took 4m0.00020412s waiting for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:24.731846 1465727 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:24.731869 1465727 pod_ready.go:38] duration metric: took 4m1.198813077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:24.731907 1465727 kubeadm.go:640] restartCluster took 5m3.213957096s
	W0131 03:24:24.731983 1465727 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:24.732022 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:26.064348 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.383924825s)
	I0131 03:24:26.064423 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:26.076943 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:26.087474 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:26.095980 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:26.096026 1465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:26.286603 1465898 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:25.808330 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.809779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.308001 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.087127 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:32.589212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:31.227776 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.495715112s)
	I0131 03:24:31.227855 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:31.241889 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:31.251082 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:31.259843 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:31.259887 1465727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0131 03:24:31.469869 1465727 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:32.310672 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:34.808959 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:36.696825 1465898 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:36.696904 1465898 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:36.696998 1465898 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:36.697121 1465898 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:36.697231 1465898 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:36.697306 1465898 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:36.699102 1465898 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:36.699244 1465898 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:36.699334 1465898 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:36.699475 1465898 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:36.699584 1465898 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:36.699700 1465898 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:36.699785 1465898 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:36.699873 1465898 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:36.699958 1465898 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:36.700052 1465898 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:36.700172 1465898 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:36.700217 1465898 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:36.700283 1465898 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:36.700345 1465898 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:36.700406 1465898 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:36.700482 1465898 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:36.700549 1465898 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:36.700647 1465898 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:36.700731 1465898 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:36.702370 1465898 out.go:204]   - Booting up control plane ...
	I0131 03:24:36.702525 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:36.702658 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:36.702731 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:36.702855 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:36.702975 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:36.703038 1465898 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:36.703248 1465898 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:36.703360 1465898 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503117 seconds
	I0131 03:24:36.703517 1465898 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:36.703652 1465898 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:36.703734 1465898 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:36.703950 1465898 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-873005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:36.704029 1465898 kubeadm.go:322] [bootstrap-token] Using token: 51ueuu.c5jl6zenf29j1pbj
	I0131 03:24:36.706123 1465898 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:36.706237 1465898 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:36.706316 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:36.706475 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:36.706662 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:36.706829 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:36.706946 1465898 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:36.707093 1465898 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:36.707179 1465898 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:36.707226 1465898 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:36.707236 1465898 kubeadm.go:322] 
	I0131 03:24:36.707310 1465898 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:36.707317 1465898 kubeadm.go:322] 
	I0131 03:24:36.707411 1465898 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:36.707418 1465898 kubeadm.go:322] 
	I0131 03:24:36.707438 1465898 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:36.707518 1465898 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:36.707590 1465898 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:36.707604 1465898 kubeadm.go:322] 
	I0131 03:24:36.707693 1465898 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:36.707706 1465898 kubeadm.go:322] 
	I0131 03:24:36.707775 1465898 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:36.707785 1465898 kubeadm.go:322] 
	I0131 03:24:36.707834 1465898 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:36.707932 1465898 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:36.708029 1465898 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:36.708038 1465898 kubeadm.go:322] 
	I0131 03:24:36.708135 1465898 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:36.708236 1465898 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:36.708245 1465898 kubeadm.go:322] 
	I0131 03:24:36.708341 1465898 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708458 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:36.708490 1465898 kubeadm.go:322] 	--control-plane 
	I0131 03:24:36.708499 1465898 kubeadm.go:322] 
	I0131 03:24:36.708601 1465898 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:36.708611 1465898 kubeadm.go:322] 
	I0131 03:24:36.708703 1465898 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708836 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:36.708855 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:24:36.708865 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:36.710643 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:33.579236 1466459 pod_ready.go:81] duration metric: took 4m0.001168183s waiting for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:33.579284 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:33.579320 1466459 pod_ready.go:38] duration metric: took 4m12.550695133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:33.579357 1466459 kubeadm.go:640] restartCluster took 4m32.725356038s
	W0131 03:24:33.579451 1466459 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:33.579495 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:36.712379 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:36.727135 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
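For context on the 1-k8s.conflist written above: a generic bridge + host-local CNI configuration has roughly this shape. The fields and subnet below are assumptions for illustration, not the literal 457-byte payload copied to the VM in this run.

// Illustrative only; the constant and its contents are assumed, not taken from this run.
package cni

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`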
	I0131 03:24:36.752650 1465898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:36.752760 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.752766 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=default-k8s-diff-port-873005 minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.833601 1465898 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:37.204982 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:37.706104 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.205928 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.705169 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:39.205448 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
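The repeated "kubectl get sa default" runs above poll until the "default" ServiceAccount exists, a sign that the controller-manager has finished bootstrapping the namespace. A minimal sketch of that wait, assuming client-go (the names and the 500ms interval are illustrative assumptions):

// Illustrative only; not minikube's actual wait code.
package sawait

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitDefaultServiceAccount returns once the "default" ServiceAccount is visible
// in the "default" namespace, or an error after the timeout.
func WaitDefaultServiceAccount(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting %s for the default service account", timeout)
}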
	I0131 03:24:36.810623 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:39.308000 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:44.456046 1465727 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0131 03:24:44.456133 1465727 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:44.456239 1465727 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:44.456349 1465727 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:44.456507 1465727 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:44.456673 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:44.456815 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:44.456888 1465727 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0131 03:24:44.456975 1465727 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:44.458558 1465727 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:44.458637 1465727 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:44.458740 1465727 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:44.458837 1465727 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:44.458937 1465727 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:44.459040 1465727 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:44.459117 1465727 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:44.459212 1465727 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:44.459291 1465727 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:44.459385 1465727 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:44.459491 1465727 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:44.459552 1465727 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:44.459628 1465727 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:44.459691 1465727 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:44.459755 1465727 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:44.459827 1465727 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:44.459899 1465727 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:44.460002 1465727 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:44.461481 1465727 out.go:204]   - Booting up control plane ...
	I0131 03:24:44.461592 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:44.461687 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:44.461801 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:44.461930 1465727 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:44.462130 1465727 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:44.462255 1465727 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503405 seconds
	I0131 03:24:44.462398 1465727 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:44.462577 1465727 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:44.462653 1465727 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:44.462817 1465727 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-711547 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0131 03:24:44.462913 1465727 kubeadm.go:322] [bootstrap-token] Using token: etlsjx.t1u4cz6ewuek932w
	I0131 03:24:44.465248 1465727 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:44.465404 1465727 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:44.465615 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:44.465805 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:44.465987 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:44.466088 1465727 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:44.466170 1465727 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:44.466239 1465727 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:44.466247 1465727 kubeadm.go:322] 
	I0131 03:24:44.466332 1465727 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:44.466354 1465727 kubeadm.go:322] 
	I0131 03:24:44.466456 1465727 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:44.466473 1465727 kubeadm.go:322] 
	I0131 03:24:44.466524 1465727 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:44.466596 1465727 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:44.466677 1465727 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:44.466696 1465727 kubeadm.go:322] 
	I0131 03:24:44.466764 1465727 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:44.466870 1465727 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:44.466971 1465727 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:44.466988 1465727 kubeadm.go:322] 
	I0131 03:24:44.467085 1465727 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0131 03:24:44.467196 1465727 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:44.467208 1465727 kubeadm.go:322] 
	I0131 03:24:44.467300 1465727 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467443 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:44.467479 1465727 kubeadm.go:322]     --control-plane 	  
	I0131 03:24:44.467488 1465727 kubeadm.go:322] 
	I0131 03:24:44.467588 1465727 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:44.467599 1465727 kubeadm.go:322] 
	I0131 03:24:44.467695 1465727 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467834 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:44.467849 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:24:44.467858 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:44.470130 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
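The bridge CNI choice logged above is materialized a few lines further down in this same process, where a 1-k8s.conflist (457 bytes) is copied into /etc/cni/net.d on the node. An illustrative way to inspect that generated file from the host, reusing the CLI pattern used elsewhere in this report and assuming the old-k8s-version-711547 profile is still running:

	# illustrative spot-check, not part of the test run: dump the bridge conflist minikube wrote
	out/minikube-linux-amd64 -p old-k8s-version-711547 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"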
	I0131 03:24:39.705234 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.205164 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.705674 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.205045 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.705592 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.205813 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.705913 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.205465 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.705236 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.205365 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.807553 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:43.809153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:47.613982 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.034446752s)
	I0131 03:24:47.614087 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:47.627141 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:47.635785 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:47.643856 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:47.643912 1466459 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:47.866988 1466459 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:44.472066 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:44.484082 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:44.503062 1465727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:44.503138 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.503164 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=old-k8s-version-711547 minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.557194 1465727 ops.go:34] apiserver oom_adj: -16
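The two Run lines just above apply the minikube-rbac cluster-admin binding and the minikube.k8s.io node labels, while ops.go records the apiserver's oom_adj of -16, i.e. the kube-apiserver process is made very unlikely to be chosen by the kernel OOM killer. An illustrative way to repeat that read against the same profile, assuming it is still up:

	# not part of the test run; single quotes keep $(pgrep ...) from expanding on the host side
	out/minikube-linux-amd64 -p old-k8s-version-711547 ssh 'cat /proc/$(pgrep kube-apiserver)/oom_adj'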
	I0131 03:24:44.796311 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.296601 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.796904 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.296474 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.796658 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.296647 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.796712 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.296469 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.705251 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.205696 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.705947 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.205519 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.705735 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.205285 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.706009 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.205416 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.705969 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.205783 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.306658 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:48.307077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:50.311654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:49.705636 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.205958 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.456803 1465898 kubeadm.go:1088] duration metric: took 13.704121927s to wait for elevateKubeSystemPrivileges.
	I0131 03:24:50.456854 1465898 kubeadm.go:406] StartCluster complete in 5m9.932475085s
	I0131 03:24:50.456883 1465898 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.457001 1465898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:24:50.460015 1465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.460408 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:24:50.460617 1465898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:24:50.460718 1465898 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460745 1465898 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.460753 1465898 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:24:50.460798 1465898 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460831 1465898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-873005"
	I0131 03:24:50.460855 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461315 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461342 1465898 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.461361 1465898 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:50.461364 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0131 03:24:50.461369 1465898 addons.go:243] addon metrics-server should already be in state true
	I0131 03:24:50.461410 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461322 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461644 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.461778 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461812 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.460670 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:24:50.486168 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0131 03:24:50.486189 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0131 03:24:50.486323 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0131 03:24:50.486737 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487153 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487761 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.487781 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488055 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.488074 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488193 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.488460 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.488587 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.488984 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.489649 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.489717 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.490413 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.490433 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.492357 1465898 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.492372 1465898 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:24:50.492402 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.492774 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.492815 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.493142 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.493853 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.493904 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.510041 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0131 03:24:50.510628 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.511294 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.511316 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.511749 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.511982 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.512352 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0131 03:24:50.512842 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.513435 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.513454 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.513922 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.513984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.514319 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0131 03:24:50.516752 1465898 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:24:50.514718 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.514788 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.518232 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:24:50.518238 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.518248 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:24:50.518271 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.521721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.522659 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522988 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.523038 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.523050 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.523231 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.523401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.523571 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.526843 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.530691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.532381 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.534246 1465898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:24:50.535799 1465898 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.535826 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:24:50.535848 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.538666 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.538998 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.539031 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.539275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.540037 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0131 03:24:50.540217 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.540435 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.540502 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.540575 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.541462 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.541480 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.541918 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.542136 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.543588 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.546790 1465898 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.546807 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:24:50.546828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.549791 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550227 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.550254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550545 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.550712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.550827 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.550914 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.720404 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:24:50.750602 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:24:50.750631 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:24:50.770493 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.781740 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.831005 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:24:50.831037 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:24:50.957145 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:50.957195 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:24:50.995868 1465898 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-873005" context rescaled to 1 replicas
	I0131 03:24:50.995924 1465898 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:24:50.997774 1465898 out.go:177] * Verifying Kubernetes components...
	I0131 03:24:50.999400 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:51.127181 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:52.814257 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.093763301s)
	I0131 03:24:52.814295 1465898 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
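The pipeline that just completed rewrites the coredns ConfigMap in place: it injects a hosts block (192.168.61.1 host.minikube.internal, with fallthrough) ahead of the forward directive and enables the log plugin, so pods in the cluster can resolve host.minikube.internal to the host gateway. An illustrative way to confirm the injected record, assuming the kubeconfig context carries the profile name:

	# illustrative check, not part of the test run
	kubectl --context default-k8s-diff-port-873005 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'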
	I0131 03:24:53.442603 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.660817091s)
	I0131 03:24:53.442735 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.315510869s)
	I0131 03:24:53.442653 1465898 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.443214595s)
	I0131 03:24:53.442784 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442807 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442746 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442847 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442800 1465898 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.442686 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.672154364s)
	I0131 03:24:53.442931 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442944 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443178 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443204 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443234 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443271 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443290 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443307 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443324 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443326 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443342 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443355 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443370 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443443 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443463 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443474 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443484 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443558 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443571 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443834 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443843 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443852 1465898 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:53.443857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.444009 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.444018 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.477413 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.477442 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.477848 1465898 node_ready.go:49] node "default-k8s-diff-port-873005" has status "Ready":"True"
	I0131 03:24:53.477878 1465898 node_ready.go:38] duration metric: took 34.988647ms waiting for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.477903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.477913 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.477891 1465898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:53.477926 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:48.797209 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.296541 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.796400 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.297357 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.797175 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.297121 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.796457 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.297151 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.797043 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.296354 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.480701 1465898 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0131 03:24:53.482138 1465898 addons.go:505] enable addons completed in 3.021541847s: enabled=[storage-provisioner metrics-server default-storageclass]
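With the addon apply finished, the three enabled addons correspond to workloads that can be checked directly: the storage-provisioner pod, the default StorageClass, and a metrics-server Deployment that this run points at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), an unpullable image, which is why the metrics-server pods in this log never report Ready. Illustrative checks, assuming the kubeconfig context matches the profile and the default StorageClass keeps minikube's usual "standard" name:

	# illustrative checks, not part of the test run
	kubectl --context default-k8s-diff-port-873005 -n kube-system get deploy metrics-server
	kubectl --context default-k8s-diff-port-873005 -n kube-system get pod storage-provisioner
	kubectl --context default-k8s-diff-port-873005 get storageclass standard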
	I0131 03:24:53.518183 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:52.806757 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:54.808761 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:53.796405 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.296358 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.796988 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.296633 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.797131 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.296750 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.797103 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.296955 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.796330 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.296387 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.837963 1466459 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:58.838075 1466459 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:58.838193 1466459 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:58.838328 1466459 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:58.838507 1466459 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:58.838599 1466459 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:58.840259 1466459 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:58.840364 1466459 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:58.840490 1466459 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:58.840620 1466459 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:58.840718 1466459 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:58.840826 1466459 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:58.840905 1466459 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:58.841008 1466459 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:58.841106 1466459 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:58.841214 1466459 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:58.841304 1466459 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:58.841349 1466459 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:58.841420 1466459 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:58.841492 1466459 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:58.841553 1466459 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:58.841621 1466459 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:58.841694 1466459 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:58.841805 1466459 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:58.841887 1466459 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:58.843555 1466459 out.go:204]   - Booting up control plane ...
	I0131 03:24:58.843684 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:58.843804 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:58.843917 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:58.844072 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:58.844208 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:58.844297 1466459 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:58.844540 1466459 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:58.844657 1466459 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003861 seconds
	I0131 03:24:58.844797 1466459 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:58.844947 1466459 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:58.845022 1466459 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:58.845232 1466459 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-958254 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:58.845309 1466459 kubeadm.go:322] [bootstrap-token] Using token: ash1vg.z2czyygl2nysl4yb
	I0131 03:24:58.846832 1466459 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:58.846943 1466459 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:58.847042 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:58.847238 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:58.847445 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:58.847620 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:58.847735 1466459 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:58.847908 1466459 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:58.847969 1466459 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:58.848034 1466459 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:58.848045 1466459 kubeadm.go:322] 
	I0131 03:24:58.848142 1466459 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:58.848152 1466459 kubeadm.go:322] 
	I0131 03:24:58.848279 1466459 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:58.848308 1466459 kubeadm.go:322] 
	I0131 03:24:58.848355 1466459 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:58.848440 1466459 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:58.848515 1466459 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:58.848531 1466459 kubeadm.go:322] 
	I0131 03:24:58.848611 1466459 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:58.848622 1466459 kubeadm.go:322] 
	I0131 03:24:58.848684 1466459 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:58.848692 1466459 kubeadm.go:322] 
	I0131 03:24:58.848769 1466459 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:58.848884 1466459 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:58.848987 1466459 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:58.848994 1466459 kubeadm.go:322] 
	I0131 03:24:58.849127 1466459 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:58.849252 1466459 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:58.849265 1466459 kubeadm.go:322] 
	I0131 03:24:58.849390 1466459 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849540 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:58.849572 1466459 kubeadm.go:322] 	--control-plane 
	I0131 03:24:58.849587 1466459 kubeadm.go:322] 
	I0131 03:24:58.849698 1466459 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:58.849710 1466459 kubeadm.go:322] 
	I0131 03:24:58.849817 1466459 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849963 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:58.849981 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:24:58.849991 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:58.851748 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:54.532127 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.532155 1465898 pod_ready.go:81] duration metric: took 1.013942045s waiting for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.532164 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537895 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.537924 1465898 pod_ready.go:81] duration metric: took 5.752669ms waiting for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537937 1465898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543819 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.543850 1465898 pod_ready.go:81] duration metric: took 5.903392ms waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543863 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549279 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.549303 1465898 pod_ready.go:81] duration metric: took 5.431331ms waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549315 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647791 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.647830 1465898 pod_ready.go:81] duration metric: took 98.504261ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647846 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446878 1465898 pod_ready.go:92] pod "kube-proxy-blwwq" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.446913 1465898 pod_ready.go:81] duration metric: took 799.058225ms waiting for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446927 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848226 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.848261 1465898 pod_ready.go:81] duration metric: took 401.323547ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848275 1465898 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:57.855091 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:57.306243 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:59.307152 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:58.796423 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.297312 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.796598 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.296932 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.797306 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.963954 1465727 kubeadm.go:1088] duration metric: took 16.460870964s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:00.964007 1465727 kubeadm.go:406] StartCluster complete in 5m39.492487154s
	I0131 03:25:00.964037 1465727 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.964135 1465727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:00.965942 1465727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.966222 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:00.966379 1465727 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:00.966464 1465727 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966478 1465727 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966474 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:25:00.966502 1465727 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966514 1465727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-711547"
	I0131 03:25:00.966522 1465727 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-711547"
	W0131 03:25:00.966531 1465727 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:00.966493 1465727 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-711547"
	W0131 03:25:00.966557 1465727 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:00.966579 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966610 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966981 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.966993 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967028 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967040 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967142 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967186 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.986034 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0131 03:25:00.986291 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0131 03:25:00.986619 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.986746 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.987299 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987320 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987467 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987479 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987834 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.988010 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:00.988075 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0131 03:25:00.988399 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.989011 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.989031 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.989620 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.990204 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.990247 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.990830 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.991921 1465727 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-711547"
	W0131 03:25:00.991946 1465727 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:00.991979 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.992390 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.992429 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.996772 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.996817 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.009234 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0131 03:25:01.009861 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.010560 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.010580 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.011185 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.011401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.013070 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0131 03:25:01.013907 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.014029 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.016324 1465727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:01.014597 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.017922 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.018046 1465727 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.018070 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:01.018094 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.018526 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.019101 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:01.019150 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.019442 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0131 03:25:01.019888 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.020393 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.020424 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.020822 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.020992 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.021500 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.022242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.022654 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.022821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.022997 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.023406 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.025473 1465727 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:01.026870 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:01.026888 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:01.026904 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.029751 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030085 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.030100 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030398 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.030647 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.030818 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.030977 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.037553 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0131 03:25:01.038049 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.038517 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.038542 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.038963 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.039329 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.041534 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.042115 1465727 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.042137 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:01.042170 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.045444 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.045973 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.045992 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.046187 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.046374 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.046619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.046751 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.284926 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:01.284951 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:01.298019 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:01.338666 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.364117 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.383424 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:01.383460 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:01.499627 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.499676 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:01.557563 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.633792 1465727 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-711547" context rescaled to 1 replicas
	I0131 03:25:01.633844 1465727 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:01.636944 1465727 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:01.638596 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:02.375769 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.07770508s)
	I0131 03:25:02.375806 1465727 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:02.849278 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.485115978s)
	I0131 03:25:02.849343 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849348 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.510642603s)
	I0131 03:25:02.849361 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849397 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849411 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849431 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291827391s)
	I0131 03:25:02.849463 1465727 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.210839065s)
	I0131 03:25:02.849466 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849478 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849490 1465727 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.851686 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851687 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851705 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851714 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851701 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851724 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851732 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851715 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851726 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851744 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851749 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851754 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851736 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851812 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851828 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.852136 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852158 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852178 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852187 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852194 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852203 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852214 1465727 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-711547"
	I0131 03:25:02.852220 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852249 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852257 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.878278 1465727 node_ready.go:49] node "old-k8s-version-711547" has status "Ready":"True"
	I0131 03:25:02.878313 1465727 node_ready.go:38] duration metric: took 28.809729ms waiting for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.878339 1465727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:02.906619 1465727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:02.910781 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.910809 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.911127 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.911137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.911148 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.913178 1465727 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0131 03:24:58.853196 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:58.880016 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:58.909967 1466459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:58.910062 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.910111 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=embed-certs-958254 minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.271954 1466459 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:59.310346 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.810934 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.310635 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.810402 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.310569 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.810714 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.310744 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.811360 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:03.311376 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.915069 1465727 addons.go:505] enable addons completed in 1.948706414s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0131 03:24:59.856962 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:02.358614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:01.807470 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:04.306044 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:03.811326 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.310435 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.811033 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.310537 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.810596 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.311182 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.811200 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.310633 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.810619 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:08.310985 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.914636 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:07.415226 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.414866 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.414894 1465727 pod_ready.go:81] duration metric: took 5.508246838s waiting for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.414904 1465727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421152 1465727 pod_ready.go:92] pod "kube-proxy-wzft2" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.421177 1465727 pod_ready.go:81] duration metric: took 6.2664ms waiting for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421191 1465727 pod_ready.go:38] duration metric: took 5.542837407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:08.421243 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:08.421313 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:08.439228 1465727 api_server.go:72] duration metric: took 6.805346982s to wait for apiserver process to appear ...
	I0131 03:25:08.439258 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:08.439321 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:25:08.445886 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:25:08.446826 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:25:08.446848 1465727 api_server.go:131] duration metric: took 7.582095ms to wait for apiserver health ...
	I0131 03:25:08.446856 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:08.450063 1465727 system_pods.go:59] 4 kube-system pods found
	I0131 03:25:08.450085 1465727 system_pods.go:61] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.450089 1465727 system_pods.go:61] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.450095 1465727 system_pods.go:61] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.450100 1465727 system_pods.go:61] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.450112 1465727 system_pods.go:74] duration metric: took 3.250434ms to wait for pod list to return data ...
	I0131 03:25:08.450121 1465727 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:08.452528 1465727 default_sa.go:45] found service account: "default"
	I0131 03:25:08.452546 1465727 default_sa.go:55] duration metric: took 2.420247ms for default service account to be created ...
	I0131 03:25:08.452553 1465727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:08.457485 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.457514 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.457522 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.457533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.457540 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.457561 1465727 retry.go:31] will retry after 235.942588ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:04.856217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.856378 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.857457 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.800354 1465496 pod_ready.go:81] duration metric: took 4m0.001111271s waiting for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:06.800395 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:25:06.800424 1465496 pod_ready.go:38] duration metric: took 4m13.561240535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:06.800474 1465496 kubeadm.go:640] restartCluster took 4m33.63933558s
	W0131 03:25:06.800585 1465496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:25:06.800626 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:25:08.811193 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.310464 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.810641 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.310665 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.810667 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.995304 1466459 kubeadm.go:1088] duration metric: took 12.08531849s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:10.995343 1466459 kubeadm.go:406] StartCluster complete in 5m10.197561628s
	I0131 03:25:10.995368 1466459 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.995476 1466459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:10.997565 1466459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.998562 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:10.998861 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:25:10.999077 1466459 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:10.999167 1466459 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-958254"
	I0131 03:25:10.999184 1466459 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-958254"
	W0131 03:25:10.999192 1466459 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:10.999198 1466459 addons.go:69] Setting default-storageclass=true in profile "embed-certs-958254"
	I0131 03:25:10.999232 1466459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-958254"
	I0131 03:25:10.999234 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:10.999598 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999631 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999673 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999709 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999738 1466459 addons.go:69] Setting metrics-server=true in profile "embed-certs-958254"
	I0131 03:25:10.999759 1466459 addons.go:234] Setting addon metrics-server=true in "embed-certs-958254"
	W0131 03:25:10.999767 1466459 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:10.999811 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.000160 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.000206 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.020646 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0131 03:25:11.020716 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0131 03:25:11.021273 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021412 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021944 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.021972 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022107 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.022139 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022542 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022540 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022777 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.023181 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.023224 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.027202 1466459 addons.go:234] Setting addon default-storageclass=true in "embed-certs-958254"
	W0131 03:25:11.027230 1466459 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:11.027263 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.027702 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.027754 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.028003 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0131 03:25:11.029048 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.029571 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.029590 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.030209 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.030885 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.030931 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.042923 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0131 03:25:11.043492 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.044071 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.044086 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.044497 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.044800 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.046645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.049444 1466459 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:11.051401 1466459 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.051441 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:11.051477 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.054476 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055341 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.055429 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0131 03:25:11.055608 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.055626 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055808 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.056025 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.056244 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.056409 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.056920 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.056932 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.056989 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0131 03:25:11.057274 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.057428 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.057495 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.057847 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.057860 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.058662 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.059343 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.059372 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.059555 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.061701 1466459 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:11.063119 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:11.063138 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:11.063159 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.066101 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066408 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.066423 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066762 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.066931 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.067054 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.067162 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.080881 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0131 03:25:11.081403 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.081919 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.081931 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.082442 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.082905 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.085059 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.085518 1466459 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.085529 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:11.085545 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.087954 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.088806 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.088858 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.088868 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.089011 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.089197 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.089609 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.229346 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.255093 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:11.255124 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:11.278162 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.314832 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:11.314860 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:11.374433 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.374463 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:11.386186 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:11.431597 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.617487 1466459 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-958254" context rescaled to 1 replicas
	I0131 03:25:11.617543 1466459 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:11.620222 1466459 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:11.621888 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:08.700194 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.700226 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.700232 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.700238 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.700243 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.700267 1465727 retry.go:31] will retry after 264.487072ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:08.970950 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.970994 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.971002 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.971013 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.971020 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.971113 1465727 retry.go:31] will retry after 296.249207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.273631 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.273666 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.273675 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.273683 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.273696 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.273722 1465727 retry.go:31] will retry after 556.880076ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.835957 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.835985 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.835991 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.835997 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.836002 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.836020 1465727 retry.go:31] will retry after 541.012405ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:10.382622 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:10.382657 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:10.382665 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:10.382674 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:10.382681 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:10.382705 1465727 retry.go:31] will retry after 644.079363ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.036738 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.036777 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.036785 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.036796 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.036803 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.036825 1465727 retry.go:31] will retry after 832.963851ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.877526 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.877569 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.877578 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.877589 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.877597 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.877635 1465727 retry.go:31] will retry after 1.088792554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:12.972355 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:12.972391 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:12.972397 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:12.972403 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:12.972408 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:12.972428 1465727 retry.go:31] will retry after 1.37018086s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:13.615542 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337333269s)
	I0131 03:25:13.615599 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.229373467s)
	I0131 03:25:13.615607 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615633 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.615632 1466459 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:13.615738 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.386359945s)
	I0131 03:25:13.615790 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615807 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616101 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616109 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616118 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616129 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616138 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616174 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616184 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616194 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616204 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616351 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616374 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.617924 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.618094 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.618057 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.783459 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.783487 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.783847 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.783872 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.966310 1466459 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.344369704s)
	I0131 03:25:13.966372 1466459 node_ready.go:35] waiting up to 6m0s for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.966498 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.534826964s)
	I0131 03:25:13.966582 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.966602 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.966990 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967011 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967023 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.967033 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.967278 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967298 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967310 1466459 addons.go:470] Verifying addon metrics-server=true in "embed-certs-958254"
	I0131 03:25:13.970159 1466459 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:10.858108 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.357207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.971527 1466459 addons.go:505] enable addons completed in 2.972461213s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:13.987533 1466459 node_ready.go:49] node "embed-certs-958254" has status "Ready":"True"
	I0131 03:25:13.987564 1466459 node_ready.go:38] duration metric: took 21.175558ms waiting for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.987577 1466459 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:13.998968 1466459 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505741 1466459 pod_ready.go:92] pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.505764 1466459 pod_ready.go:81] duration metric: took 1.506759288s waiting for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505775 1466459 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511011 1466459 pod_ready.go:92] pod "etcd-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.511037 1466459 pod_ready.go:81] duration metric: took 5.255671ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511050 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515672 1466459 pod_ready.go:92] pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.515691 1466459 pod_ready.go:81] duration metric: took 4.632936ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515699 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520372 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.520388 1466459 pod_ready.go:81] duration metric: took 4.683171ms waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520397 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570633 1466459 pod_ready.go:92] pod "kube-proxy-2n2v5" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.570660 1466459 pod_ready.go:81] duration metric: took 50.257557ms waiting for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570671 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970302 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.970325 1466459 pod_ready.go:81] duration metric: took 399.647846ms waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970336 1466459 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:17.977775 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
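For context on the pod_ready wait in the block above: after the node reports Ready, minikube polls each system-critical pod (selected by the labels listed at 03:25:13.987577) until its Ready condition is True. The sketch below is a minimal, hypothetical client-go version of that check, not minikube's actual implementation; the kubeconfig path and the single label selector are assumptions chosen for illustration.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podsReady reports whether every pod matching the label selector in the
    // kube-system namespace has its Ready condition set to True.
    func podsReady(clientset *kubernetes.Clientset, selector string) (bool, error) {
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(),
    		metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, pod := range pods.Items {
    		ready := false
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	// Kubeconfig path is an assumption for illustration only.
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ok, err := podsReady(clientset, "k8s-app=kube-dns")
    	fmt.Println(ok, err)
    }

In the log, minikube repeats this kind of check (with retries) for kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler and, where enabled, metrics-server, which is why the metrics-server pods keep reporting "Ready":"False" until their containers come up.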
	I0131 03:25:14.349642 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:14.349679 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:14.349688 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:14.349698 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:14.349705 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:14.349726 1465727 retry.go:31] will retry after 1.923619057s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:16.279057 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:16.279090 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:16.279098 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:16.279108 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:16.279114 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:16.279137 1465727 retry.go:31] will retry after 2.073030623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:18.359162 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:18.359189 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:18.359195 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:18.359204 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:18.359209 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:18.359228 1465727 retry.go:31] will retry after 3.260033275s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:15.855521 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:17.855614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:20.514278 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.713623849s)
	I0131 03:25:20.514394 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:20.527663 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:25:20.536562 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:25:20.545294 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:25:20.545336 1465496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:25:20.598639 1465496 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0131 03:25:20.598867 1465496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:25:20.744229 1465496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:25:20.744371 1465496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:25:20.744509 1465496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:25:20.966346 1465496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:25:20.968311 1465496 out.go:204]   - Generating certificates and keys ...
	I0131 03:25:20.968451 1465496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:25:20.968540 1465496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:25:20.968652 1465496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:25:20.968758 1465496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:25:20.968846 1465496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:25:20.969285 1465496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:25:20.969711 1465496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:25:20.970103 1465496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:25:20.970500 1465496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:25:20.970914 1465496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:25:20.971238 1465496 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:25:20.971319 1465496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:25:21.137192 1465496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:25:21.403913 1465496 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0131 03:25:21.508809 1465496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:25:21.721878 1465496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:25:22.136726 1465496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:25:22.137207 1465496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:25:22.139977 1465496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:25:19.979362 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.477779 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.624554 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:21.624586 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:21.624592 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:21.624602 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:21.624607 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:21.624626 1465727 retry.go:31] will retry after 3.519201574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:19.856226 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.856396 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:23.857487 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.141783 1465496 out.go:204]   - Booting up control plane ...
	I0131 03:25:22.141884 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:25:22.141972 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:25:22.143031 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:25:22.163448 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:25:22.163586 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:25:22.163682 1465496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:25:22.287643 1465496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:25:24.479871 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:26.977625 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:25.149248 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:25.149277 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:25.149282 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:25.149290 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:25.149295 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:25.149314 1465727 retry.go:31] will retry after 5.238557946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:25.857650 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:28.356862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.793355 1465496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506089 seconds
	I0131 03:25:30.811559 1465496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:25:30.830148 1465496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:25:31.367774 1465496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:25:31.368036 1465496 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-625812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:25:31.887121 1465496 kubeadm.go:322] [bootstrap-token] Using token: t3t0h9.3huj9bl3w24ti869
	I0131 03:25:31.888852 1465496 out.go:204]   - Configuring RBAC rules ...
	I0131 03:25:31.888974 1465496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:25:31.893841 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:25:31.902695 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:25:31.908132 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:25:31.912738 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:25:31.918089 1465496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:25:31.936690 1465496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:25:32.182433 1465496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:25:32.325953 1465496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:25:32.325981 1465496 kubeadm.go:322] 
	I0131 03:25:32.326114 1465496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:25:32.326143 1465496 kubeadm.go:322] 
	I0131 03:25:32.326244 1465496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:25:32.326272 1465496 kubeadm.go:322] 
	I0131 03:25:32.326332 1465496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:25:32.326416 1465496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:25:32.326500 1465496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:25:32.326511 1465496 kubeadm.go:322] 
	I0131 03:25:32.326588 1465496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:25:32.326598 1465496 kubeadm.go:322] 
	I0131 03:25:32.326664 1465496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:25:32.326674 1465496 kubeadm.go:322] 
	I0131 03:25:32.326743 1465496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:25:32.326853 1465496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:25:32.326947 1465496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:25:32.326958 1465496 kubeadm.go:322] 
	I0131 03:25:32.327052 1465496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:25:32.327151 1465496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:25:32.327160 1465496 kubeadm.go:322] 
	I0131 03:25:32.327264 1465496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327405 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:25:32.327437 1465496 kubeadm.go:322] 	--control-plane 
	I0131 03:25:32.327447 1465496 kubeadm.go:322] 
	I0131 03:25:32.327553 1465496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:25:32.327564 1465496 kubeadm.go:322] 
	I0131 03:25:32.327667 1465496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327800 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:25:32.328638 1465496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:25:32.328815 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:25:32.328835 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:25:32.330439 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:25:28.984930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:31.480349 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.393923 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:30.393959 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:30.393968 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:30.393979 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:30.393985 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:30.394010 1465727 retry.go:31] will retry after 6.045479872s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:30.357227 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.358411 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.332529 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:25:32.442284 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
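The 1-k8s.conflist written above is the bridge CNI configuration minikube installs when the kvm2 driver and crio runtime are detected (see "Configuring bridge CNI" at 03:25:32.330439). The log does not show the 457-byte file contents, so the snippet below is only a sketch of a standard bridge + host-local conflist; the bridge name, subnet, and cniVersion are assumptions and may differ from what minikube actually generates.

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }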
	I0131 03:25:32.487754 1465496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:25:32.487829 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.487926 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=no-preload-625812 minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.706857 1465496 ops.go:34] apiserver oom_adj: -16
	I0131 03:25:32.707010 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.207717 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.707229 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.207690 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.707786 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:35.207781 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.980255 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.481025 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.444898 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:36.444932 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:36.444938 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:36.444946 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:36.444951 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:36.444993 1465727 retry.go:31] will retry after 6.676077992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:34.855915 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:37.356945 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:35.707273 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.207173 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.707797 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.207697 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.707209 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.207989 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.707538 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.207693 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.707737 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:40.207439 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.980635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:41.479377 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:43.125885 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:43.125912 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:43.125917 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:43.125924 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:43.125928 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:43.125947 1465727 retry.go:31] will retry after 7.454064585s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:39.858377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:42.356966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:40.707639 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.207708 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.707131 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.207700 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.707292 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.207810 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.707392 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.207490 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.707258 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.883783 1465496 kubeadm.go:1088] duration metric: took 12.396028951s to wait for elevateKubeSystemPrivileges.
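The repeated `kubectl get sa default` calls above (roughly every 500ms from 03:25:32.707 through 03:25:44.707) are a readiness poll: the elevateKubeSystemPrivileges step is considered complete once the cluster-admin binding is applied and the "default" ServiceAccount exists. The following is a minimal, hypothetical Go sketch of that polling pattern, not minikube's actual code; the binary path, kubeconfig path, interval, and timeout are assumptions mirroring the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultServiceAccount polls `kubectl get sa default` until it
    // succeeds or the timeout elapses, mirroring the retry cadence in the log.
    func waitForDefaultServiceAccount(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			return nil // default ServiceAccount exists; RBAC bootstrap is done
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for default service account")
    }

    func main() {
    	if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }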
	I0131 03:25:44.883823 1465496 kubeadm.go:406] StartCluster complete in 5m11.777629477s
	I0131 03:25:44.883850 1465496 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.883949 1465496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:44.886319 1465496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.886620 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:44.886727 1465496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:44.886814 1465496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-625812"
	I0131 03:25:44.886837 1465496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-625812"
	W0131 03:25:44.886849 1465496 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:44.886903 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.886934 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:25:44.886991 1465496 addons.go:69] Setting default-storageclass=true in profile "no-preload-625812"
	I0131 03:25:44.887007 1465496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-625812"
	I0131 03:25:44.887134 1465496 addons.go:69] Setting metrics-server=true in profile "no-preload-625812"
	I0131 03:25:44.887155 1465496 addons.go:234] Setting addon metrics-server=true in "no-preload-625812"
	W0131 03:25:44.887164 1465496 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:44.887216 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.887313 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887349 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887407 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887439 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887611 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887655 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.908876 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0131 03:25:44.908881 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0131 03:25:44.908879 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0131 03:25:44.909406 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909433 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909512 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909925 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.909950 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910054 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910098 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910123 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910148 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910434 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910530 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910543 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910740 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.911086 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911140 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.911185 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911230 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.914635 1465496 addons.go:234] Setting addon default-storageclass=true in "no-preload-625812"
	W0131 03:25:44.914667 1465496 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:44.914698 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.915089 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.915135 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.931265 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0131 03:25:44.931296 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0131 03:25:44.931816 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.931859 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.932148 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932599 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932677 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932938 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933062 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.933655 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.933681 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.933726 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933947 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934129 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.934262 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934954 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.935001 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.936333 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.938601 1465496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:44.940239 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:44.940256 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:44.940273 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.938638 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.942306 1465496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:44.944873 1465496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:44.944894 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:44.944914 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.943649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944987 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.945023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944263 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.945795 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.946072 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.946309 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.949097 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949522 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.949544 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949710 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.949892 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.950040 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.950179 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.959691 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0131 03:25:44.960146 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.960696 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.960723 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.961045 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.961279 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.963057 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.963321 1465496 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:44.963342 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:44.963363 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.966336 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.966808 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.966845 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.967006 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.967205 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.967329 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.967472 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:45.114858 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:45.135760 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:45.209439 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:45.209466 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:45.219146 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:45.287400 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:45.287430 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:45.380888 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:45.380917 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:45.462341 1465496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-625812" context rescaled to 1 replicas
	I0131 03:25:45.462403 1465496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:45.463834 1465496 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:45.465542 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:45.515980 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:46.322228 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.20732453s)
	I0131 03:25:46.322281 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.186472094s)
	I0131 03:25:46.322327 1465496 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
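The ConfigMap edit completed above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.72.1 here). Based on the sed expression in the command started at 03:25:45.135760, the inserted stanza looks roughly like the following (a sketch; indentation in the live Corefile may differ):

    hosts {
       192.168.72.1 host.minikube.internal
       fallthrough
    }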
	I0131 03:25:46.322296 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322366 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322413 1465496 node_ready.go:35] waiting up to 6m0s for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.322369 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.103177926s)
	I0131 03:25:46.322663 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322676 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322757 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.322760 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.322773 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.322783 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322791 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323137 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323156 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323167 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.323176 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323177 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323257 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323281 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323295 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323733 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323755 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.329699 1465496 node_ready.go:49] node "no-preload-625812" has status "Ready":"True"
	I0131 03:25:46.329719 1465496 node_ready.go:38] duration metric: took 7.243031ms waiting for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.329728 1465496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:46.345672 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.345703 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.345984 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.346000 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.348953 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:46.699387 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183353653s)
	I0131 03:25:46.699456 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699474 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.699910 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.699932 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.699945 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699957 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.700251 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.700272 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.700285 1465496 addons.go:470] Verifying addon metrics-server=true in "no-preload-625812"
	I0131 03:25:46.702053 1465496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:43.980700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.478141 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:44.855513 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.857198 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:49.356657 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.703328 1465496 addons.go:505] enable addons completed in 1.816619953s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:46.865293 1465496 pod_ready.go:97] error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865325 1465496 pod_ready.go:81] duration metric: took 516.342792ms waiting for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:46.865336 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865343 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872316 1465496 pod_ready.go:92] pod "coredns-76f75df574-hvxjf" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.872345 1465496 pod_ready.go:81] duration metric: took 1.006996095s waiting for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872355 1465496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878192 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.878215 1465496 pod_ready.go:81] duration metric: took 5.854656ms waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878223 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883120 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.883139 1465496 pod_ready.go:81] duration metric: took 4.910099ms waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883147 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889909 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.889934 1465496 pod_ready.go:81] duration metric: took 6.780796ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889944 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926206 1465496 pod_ready.go:92] pod "kube-proxy-pkvj6" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:48.926230 1465496 pod_ready.go:81] duration metric: took 1.036280111s waiting for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926239 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325588 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:49.325613 1465496 pod_ready.go:81] duration metric: took 399.368272ms waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325623 1465496 pod_ready.go:38] duration metric: took 2.995885901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:49.325640 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:49.325693 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:49.339591 1465496 api_server.go:72] duration metric: took 3.877145066s to wait for apiserver process to appear ...
	I0131 03:25:49.339624 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:49.339652 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:25:49.345130 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:25:49.346350 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:25:49.346371 1465496 api_server.go:131] duration metric: took 6.739501ms to wait for apiserver health ...
	I0131 03:25:49.346379 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:49.529845 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:25:49.529876 1465496 system_pods.go:61] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.529881 1465496 system_pods.go:61] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.529885 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.529890 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.529894 1465496 system_pods.go:61] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.529898 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.529905 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.529909 1465496 system_pods.go:61] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.529918 1465496 system_pods.go:74] duration metric: took 183.532223ms to wait for pod list to return data ...
	I0131 03:25:49.529926 1465496 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:49.726239 1465496 default_sa.go:45] found service account: "default"
	I0131 03:25:49.726266 1465496 default_sa.go:55] duration metric: took 196.333831ms for default service account to be created ...
	I0131 03:25:49.726276 1465496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:49.933151 1465496 system_pods.go:86] 8 kube-system pods found
	I0131 03:25:49.933188 1465496 system_pods.go:89] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.933198 1465496 system_pods.go:89] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.933205 1465496 system_pods.go:89] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.933212 1465496 system_pods.go:89] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.933220 1465496 system_pods.go:89] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.933228 1465496 system_pods.go:89] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.933243 1465496 system_pods.go:89] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.933254 1465496 system_pods.go:89] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.933268 1465496 system_pods.go:126] duration metric: took 206.984671ms to wait for k8s-apps to be running ...
	I0131 03:25:49.933282 1465496 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:25:49.933345 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:49.949256 1465496 system_svc.go:56] duration metric: took 15.963316ms WaitForService to wait for kubelet.
	I0131 03:25:49.949290 1465496 kubeadm.go:581] duration metric: took 4.486852525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:25:49.949316 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:25:50.126992 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:25:50.127032 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:25:50.127044 1465496 node_conditions.go:105] duration metric: took 177.723252ms to run NodePressure ...
	I0131 03:25:50.127056 1465496 start.go:228] waiting for startup goroutines ...
	I0131 03:25:50.127063 1465496 start.go:233] waiting for cluster config update ...
	I0131 03:25:50.127072 1465496 start.go:242] writing updated cluster config ...
	I0131 03:25:50.127343 1465496 ssh_runner.go:195] Run: rm -f paused
	I0131 03:25:50.184224 1465496 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0131 03:25:50.186267 1465496 out.go:177] * Done! kubectl is now configured to use "no-preload-625812" cluster and "default" namespace by default
	I0131 03:25:48.481166 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.977129 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:52.977622 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.586089 1465727 system_pods.go:86] 6 kube-system pods found
	I0131 03:25:50.586129 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:50.586138 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Pending
	I0131 03:25:50.586144 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:50.586151 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Pending
	I0131 03:25:50.586172 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:50.586182 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:50.586211 1465727 retry.go:31] will retry after 13.55623924s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:51.856116 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:53.856661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:55.480823 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:57.978681 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:56.355895 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:58.356767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:59.981147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.479364 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:00.856081 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.977218 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:06.978863 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.148474 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:04.148505 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:04.148511 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Pending
	I0131 03:26:04.148516 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:04.148520 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:04.148524 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:04.148528 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:04.148533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:04.148537 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:04.148555 1465727 retry.go:31] will retry after 14.271857783s: missing components: etcd
	I0131 03:26:05.355042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:07.358366 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:08.981159 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:10.982761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:09.856454 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:12.357096 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:13.478470 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:15.977827 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.426593 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:18.426625 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:18.426634 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Running
	I0131 03:26:18.426641 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:18.426647 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:18.426652 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:18.426657 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:18.426667 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:18.426676 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:18.426690 1465727 system_pods.go:126] duration metric: took 1m9.974130417s to wait for k8s-apps to be running ...
	I0131 03:26:18.426704 1465727 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:26:18.426762 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:26:18.443853 1465727 system_svc.go:56] duration metric: took 17.14056ms WaitForService to wait for kubelet.
	I0131 03:26:18.443902 1465727 kubeadm.go:581] duration metric: took 1m16.810021481s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:26:18.443930 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:26:18.447269 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:26:18.447298 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:26:18.447311 1465727 node_conditions.go:105] duration metric: took 3.375419ms to run NodePressure ...
	I0131 03:26:18.447325 1465727 start.go:228] waiting for startup goroutines ...
	I0131 03:26:18.447333 1465727 start.go:233] waiting for cluster config update ...
	I0131 03:26:18.447348 1465727 start.go:242] writing updated cluster config ...
	I0131 03:26:18.447643 1465727 ssh_runner.go:195] Run: rm -f paused
	I0131 03:26:18.500327 1465727 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0131 03:26:18.502092 1465727 out.go:177] 
	W0131 03:26:18.503693 1465727 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0131 03:26:18.505132 1465727 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0131 03:26:18.506889 1465727 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-711547" cluster and "default" namespace by default
	I0131 03:26:14.856448 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:17.357112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.478401 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:20.977208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.978473 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:19.857118 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.358299 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:25.478227 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:27.978500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:24.855341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:26.855774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:28.856168 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:30.477275 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:32.478896 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:31.357512 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:33.363164 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:34.978058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:37.481411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:35.856084 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:38.358589 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:39.976914 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:41.979388 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:40.856122 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:42.856950 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:44.477345 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:46.478466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:45.356312 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:47.855178 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:48.978543 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.477641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:49.856079 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.856377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:54.358161 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:53.477989 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:55.977887 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:56.855581 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.856493 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.477589 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:00.478116 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:02.978262 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:01.354961 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:03.355994 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.478139 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.977913 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.356248 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.855596 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:10.479147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:12.977533 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:09.856222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:11.857068 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.356693 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.978967 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:17.477119 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:16.854825 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:18.855620 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:19.477877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:21.482081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:20.856333 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.355603 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.978877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:26.477700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:25.356085 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:27.356888 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:28.478497 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:30.977469 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:32.977663 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:29.854905 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:31.855752 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:33.855976 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.480505 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.977880 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.857042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.862112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:39.977961 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.478948 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:40.355787 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.358217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.977950 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.478570 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.855551 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.355853 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.977939 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:51.978267 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.855671 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:52.357889 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:53.979331 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:56.477411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:54.856642 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:57.357372 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:58.478175 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:00.977929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.978272 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:59.856232 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.356390 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:05.477602 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:07.478168 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:04.855423 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:06.859565 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.355517 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.977639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.977754 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.855199 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:13.856260 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:14.477406 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:16.478372 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:15.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:17.861124 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:18.980067 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:21.478833 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:20.356883 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:22.358007 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:23.979040 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.478463 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:24.855207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.855709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.866306 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.978973 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.477340 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.355706 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.855699 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.477521 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:35.478390 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:37.977270 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:36.358244 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:38.855704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:39.979930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.477381 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:40.856442 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.857041 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:44.477500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:46.478446 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:45.356039 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:47.855042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:48.977241 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:50.977925 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:52.978323 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:49.857897 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:51.857941 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:54.357042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.477690 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:57.477927 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.855298 1465898 pod_ready.go:81] duration metric: took 4m0.007008152s waiting for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	E0131 03:28:55.855323 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:28:55.855330 1465898 pod_ready.go:38] duration metric: took 4m2.377385486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:28:55.855346 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:28:55.855399 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:55.855533 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:55.913399 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:55.913425 1465898 cri.go:89] found id: ""
	I0131 03:28:55.913445 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:55.913515 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.918308 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:55.918379 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:55.964846 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:55.964872 1465898 cri.go:89] found id: ""
	I0131 03:28:55.964881 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:55.964942 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.969090 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:55.969158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:56.012247 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:56.012271 1465898 cri.go:89] found id: ""
	I0131 03:28:56.012279 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:56.012337 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.016457 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:56.016535 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:56.053842 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.053867 1465898 cri.go:89] found id: ""
	I0131 03:28:56.053877 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:56.053926 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.057807 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:56.057889 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:28:56.097431 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.097465 1465898 cri.go:89] found id: ""
	I0131 03:28:56.097477 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:28:56.097549 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.101354 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:28:56.101420 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:28:56.136696 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.136725 1465898 cri.go:89] found id: ""
	I0131 03:28:56.136735 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:28:56.136800 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.140584 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:28:56.140661 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:28:56.177606 1465898 cri.go:89] found id: ""
	I0131 03:28:56.177639 1465898 logs.go:284] 0 containers: []
	W0131 03:28:56.177650 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:28:56.177658 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:28:56.177779 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:28:56.215795 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.215824 1465898 cri.go:89] found id: ""
	I0131 03:28:56.215835 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:28:56.215909 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.220297 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:28:56.220324 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:28:56.319500 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:28:56.319544 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.355731 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:28:56.355767 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.410301 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:28:56.410341 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:28:56.858474 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:28:56.858531 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:28:56.903299 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:28:56.903337 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.961020 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:28:56.961070 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.998347 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:28:56.998382 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:28:57.011562 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:28:57.011594 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:28:57.152899 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:28:57.152937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:57.201041 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:28:57.201084 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:57.247253 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:28:57.247289 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.478758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:01.977644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:59.786669 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:28:59.804046 1465898 api_server.go:72] duration metric: took 4m8.808083047s to wait for apiserver process to appear ...
	I0131 03:28:59.804079 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:28:59.804131 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:59.804249 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:59.846418 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:59.846440 1465898 cri.go:89] found id: ""
	I0131 03:28:59.846448 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:59.846516 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.850526 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:59.850588 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:59.892343 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:59.892373 1465898 cri.go:89] found id: ""
	I0131 03:28:59.892382 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:59.892449 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.896483 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:59.896561 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:59.933901 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.933934 1465898 cri.go:89] found id: ""
	I0131 03:28:59.933945 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:59.934012 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.938150 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:59.938232 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:59.980328 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:59.980354 1465898 cri.go:89] found id: ""
	I0131 03:28:59.980363 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:59.980418 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.984866 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:59.984943 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:00.029663 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.029695 1465898 cri.go:89] found id: ""
	I0131 03:29:00.029705 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:00.029753 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.034759 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:00.034827 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:00.084320 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.084347 1465898 cri.go:89] found id: ""
	I0131 03:29:00.084355 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:00.084431 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.088744 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:00.088819 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:00.133028 1465898 cri.go:89] found id: ""
	I0131 03:29:00.133062 1465898 logs.go:284] 0 containers: []
	W0131 03:29:00.133072 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:00.133080 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:00.133145 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:00.175187 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.175219 1465898 cri.go:89] found id: ""
	I0131 03:29:00.175229 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:00.175306 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.179387 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:00.179420 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.233630 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:00.233676 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.271692 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:00.271735 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:00.655131 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:00.655177 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:00.757571 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:00.757628 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:00.805958 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:00.806000 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:00.842604 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:00.842650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:00.888064 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:00.888103 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.939276 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:00.939331 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:00.981965 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:00.982006 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:00.996237 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:00.996265 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:01.129715 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:01.129754 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.677131 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:29:03.684945 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:29:03.687117 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:03.687142 1465898 api_server.go:131] duration metric: took 3.883056117s to wait for apiserver health ...
	I0131 03:29:03.687171 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:03.687245 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:03.687303 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:03.727289 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:03.727314 1465898 cri.go:89] found id: ""
	I0131 03:29:03.727322 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:29:03.727375 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.731095 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:03.731158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:03.779103 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.779134 1465898 cri.go:89] found id: ""
	I0131 03:29:03.779144 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:29:03.779223 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.783387 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:03.783459 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:03.821342 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:03.821368 1465898 cri.go:89] found id: ""
	I0131 03:29:03.821376 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:29:03.821438 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.825907 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:03.825990 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:03.863826 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:03.863853 1465898 cri.go:89] found id: ""
	I0131 03:29:03.863867 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:29:03.863919 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.868093 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:03.868163 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:03.908653 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:03.908681 1465898 cri.go:89] found id: ""
	I0131 03:29:03.908690 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:03.908750 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.912998 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:03.913078 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:03.961104 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:03.961131 1465898 cri.go:89] found id: ""
	I0131 03:29:03.961139 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:03.961212 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.965913 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:03.965996 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:04.003791 1465898 cri.go:89] found id: ""
	I0131 03:29:04.003824 1465898 logs.go:284] 0 containers: []
	W0131 03:29:04.003833 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:04.003840 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:04.003907 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:04.040736 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.040773 1465898 cri.go:89] found id: ""
	I0131 03:29:04.040785 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:04.040852 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:04.045013 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:04.045042 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:04.091615 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:04.091650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:04.204602 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:04.204638 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:04.257510 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:04.257548 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:04.296585 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:04.296619 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:04.360438 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:04.360480 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.398825 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:04.398858 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:04.711357 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:04.711403 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:04.804895 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:04.804940 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:04.819394 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:04.819426 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:04.869897 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:04.869937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:04.918002 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:04.918040 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:07.471428 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:07.471466 1465898 system_pods.go:61] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.471474 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.471481 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.471488 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.471495 1465898 system_pods.go:61] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.471501 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.471516 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.471524 1465898 system_pods.go:61] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.471535 1465898 system_pods.go:74] duration metric: took 3.784356035s to wait for pod list to return data ...
	I0131 03:29:07.471552 1465898 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:07.474519 1465898 default_sa.go:45] found service account: "default"
	I0131 03:29:07.474547 1465898 default_sa.go:55] duration metric: took 2.986529ms for default service account to be created ...
	I0131 03:29:07.474559 1465898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:07.480778 1465898 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:07.480805 1465898 system_pods.go:89] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.480810 1465898 system_pods.go:89] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.480816 1465898 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.480823 1465898 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.480827 1465898 system_pods.go:89] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.480831 1465898 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.480837 1465898 system_pods.go:89] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.480842 1465898 system_pods.go:89] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.480850 1465898 system_pods.go:126] duration metric: took 6.285456ms to wait for k8s-apps to be running ...
	I0131 03:29:07.480856 1465898 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:07.480905 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:07.497612 1465898 system_svc.go:56] duration metric: took 16.74594ms WaitForService to wait for kubelet.
	I0131 03:29:07.497643 1465898 kubeadm.go:581] duration metric: took 4m16.501686281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:07.497678 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:07.501680 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:07.501732 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:07.501748 1465898 node_conditions.go:105] duration metric: took 4.063716ms to run NodePressure ...
	I0131 03:29:07.501763 1465898 start.go:228] waiting for startup goroutines ...
	I0131 03:29:07.501772 1465898 start.go:233] waiting for cluster config update ...
	I0131 03:29:07.501818 1465898 start.go:242] writing updated cluster config ...
	I0131 03:29:07.502234 1465898 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:07.559193 1465898 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:07.561350 1465898 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-873005" cluster and "default" namespace by default
	I0131 03:29:03.978465 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:06.477545 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:08.480466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:10.978639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:13.478152 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978967 1466459 pod_ready.go:81] duration metric: took 4m0.008624682s waiting for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	E0131 03:29:15.978976 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:29:15.978984 1466459 pod_ready.go:38] duration metric: took 4m1.99139457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:29:15.978999 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:29:15.979026 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:15.979074 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:16.041735 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:16.041774 1466459 cri.go:89] found id: ""
	I0131 03:29:16.041784 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:16.041845 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.046910 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:16.046982 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:16.085124 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.085156 1466459 cri.go:89] found id: ""
	I0131 03:29:16.085166 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:16.085226 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.089189 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:16.089274 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:16.129255 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.129286 1466459 cri.go:89] found id: ""
	I0131 03:29:16.129296 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:16.129352 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.133364 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:16.133451 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:16.170605 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.170634 1466459 cri.go:89] found id: ""
	I0131 03:29:16.170643 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:16.170704 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.175117 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:16.175197 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:16.210139 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:16.210169 1466459 cri.go:89] found id: ""
	I0131 03:29:16.210179 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:16.210248 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.214877 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:16.214960 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:16.257772 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.257797 1466459 cri.go:89] found id: ""
	I0131 03:29:16.257807 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:16.257878 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.262276 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:16.262341 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:16.304203 1466459 cri.go:89] found id: ""
	I0131 03:29:16.304233 1466459 logs.go:284] 0 containers: []
	W0131 03:29:16.304241 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:16.304248 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:16.304325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:16.343337 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:16.343360 1466459 cri.go:89] found id: ""
	I0131 03:29:16.343368 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:16.343423 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.347098 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:16.347129 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.389501 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:16.389544 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.426153 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:16.426196 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.476241 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:16.476281 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.533086 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:16.533131 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:16.575664 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:16.575701 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:16.675622 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:16.675669 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:16.690251 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:16.690285 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:16.828714 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:16.828748 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:17.253277 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:17.253335 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:17.304285 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:17.304323 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:17.340432 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:17.340465 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:19.889056 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:29:19.904225 1466459 api_server.go:72] duration metric: took 4m8.286630357s to wait for apiserver process to appear ...
	I0131 03:29:19.904258 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:29:19.904302 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:19.904375 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:19.939116 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:19.939147 1466459 cri.go:89] found id: ""
	I0131 03:29:19.939159 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:19.939225 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.943273 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:19.943351 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:19.979411 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:19.979436 1466459 cri.go:89] found id: ""
	I0131 03:29:19.979445 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:19.979512 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.984054 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:19.984148 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:20.022949 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.022978 1466459 cri.go:89] found id: ""
	I0131 03:29:20.022988 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:20.023046 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.027252 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:20.027325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:20.064215 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.064238 1466459 cri.go:89] found id: ""
	I0131 03:29:20.064246 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:20.064303 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.068589 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:20.068687 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:20.106750 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.106781 1466459 cri.go:89] found id: ""
	I0131 03:29:20.106792 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:20.106854 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.111267 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:20.111342 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:20.147750 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.147789 1466459 cri.go:89] found id: ""
	I0131 03:29:20.147801 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:20.147873 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.152882 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:20.152950 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:20.191082 1466459 cri.go:89] found id: ""
	I0131 03:29:20.191121 1466459 logs.go:284] 0 containers: []
	W0131 03:29:20.191133 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:20.191143 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:20.191226 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:20.226346 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.226373 1466459 cri.go:89] found id: ""
	I0131 03:29:20.226382 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:20.226436 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.230561 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:20.230607 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:20.596919 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:20.596968 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:20.691142 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:20.691184 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:20.750659 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:20.750692 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.816839 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:20.816882 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.852691 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:20.852730 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.909788 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:20.909828 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.950311 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:20.950360 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.985515 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:20.985554 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:21.030306 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:21.030350 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:21.043130 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:21.043172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:21.160716 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:21.160763 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.706550 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:29:23.711528 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:29:23.713998 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:23.714027 1466459 api_server.go:131] duration metric: took 3.809760557s to wait for apiserver health ...
	I0131 03:29:23.714039 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:23.714070 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:23.714142 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:23.754990 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:23.755017 1466459 cri.go:89] found id: ""
	I0131 03:29:23.755028 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:23.755091 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.759151 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:23.759224 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:23.798410 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.798448 1466459 cri.go:89] found id: ""
	I0131 03:29:23.798459 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:23.798541 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.802512 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:23.802588 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:23.840962 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:23.840991 1466459 cri.go:89] found id: ""
	I0131 03:29:23.841001 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:23.841073 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.844943 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:23.845021 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:23.882314 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:23.882355 1466459 cri.go:89] found id: ""
	I0131 03:29:23.882368 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:23.882438 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.886227 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:23.886292 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:23.925001 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:23.925031 1466459 cri.go:89] found id: ""
	I0131 03:29:23.925042 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:23.925100 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.929531 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:23.929601 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:23.969068 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:23.969098 1466459 cri.go:89] found id: ""
	I0131 03:29:23.969108 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:23.969167 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.973154 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:23.973216 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:24.010928 1466459 cri.go:89] found id: ""
	I0131 03:29:24.010956 1466459 logs.go:284] 0 containers: []
	W0131 03:29:24.010963 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:24.010970 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:24.011026 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:24.052588 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.052614 1466459 cri.go:89] found id: ""
	I0131 03:29:24.052622 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:24.052678 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:24.056735 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:24.056762 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:24.105290 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:24.105324 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:24.152634 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:24.152678 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:24.198981 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:24.199021 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:24.247140 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:24.247172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:24.287472 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:24.287502 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:24.344060 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:24.344101 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.384811 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:24.384846 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:24.707577 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:24.707628 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:24.756450 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:24.756490 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:24.844886 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:24.844935 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:24.859102 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:24.859132 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:27.482952 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:27.482992 1466459 system_pods.go:61] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.483000 1466459 system_pods.go:61] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.483007 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.483027 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.483038 1466459 system_pods.go:61] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.483049 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.483056 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.483066 1466459 system_pods.go:61] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.483076 1466459 system_pods.go:74] duration metric: took 3.76903179s to wait for pod list to return data ...
	I0131 03:29:27.483087 1466459 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:27.486092 1466459 default_sa.go:45] found service account: "default"
	I0131 03:29:27.486121 1466459 default_sa.go:55] duration metric: took 3.025473ms for default service account to be created ...
	I0131 03:29:27.486131 1466459 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:27.491964 1466459 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:27.491989 1466459 system_pods.go:89] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.491997 1466459 system_pods.go:89] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.492004 1466459 system_pods.go:89] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.492010 1466459 system_pods.go:89] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.492015 1466459 system_pods.go:89] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.492022 1466459 system_pods.go:89] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.492032 1466459 system_pods.go:89] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.492044 1466459 system_pods.go:89] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.492059 1466459 system_pods.go:126] duration metric: took 5.920402ms to wait for k8s-apps to be running ...
	I0131 03:29:27.492076 1466459 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:27.492131 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:27.507857 1466459 system_svc.go:56] duration metric: took 15.770556ms WaitForService to wait for kubelet.
	I0131 03:29:27.507891 1466459 kubeadm.go:581] duration metric: took 4m15.890307101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:27.507918 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:27.510942 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:27.510968 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:27.510980 1466459 node_conditions.go:105] duration metric: took 3.056564ms to run NodePressure ...
	I0131 03:29:27.510992 1466459 start.go:228] waiting for startup goroutines ...
	I0131 03:29:27.510998 1466459 start.go:233] waiting for cluster config update ...
	I0131 03:29:27.511008 1466459 start.go:242] writing updated cluster config ...
	I0131 03:29:27.511334 1466459 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:27.564506 1466459 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:27.566730 1466459 out.go:177] * Done! kubectl is now configured to use "embed-certs-958254" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:20:05 UTC, ends at Wed 2024-01-31 03:39:30 UTC. --
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.768282785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672370768253391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=207b0998-7423-4995-a95a-7b66a3204b80 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.768975970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a002d1a9-70ec-4960-b12f-29cbc29a3d1b name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.769046780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a002d1a9-70ec-4960-b12f-29cbc29a3d1b name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.769280956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916,PodSandboxId:c6f7afec463a0cb9b0d5613dab03cf5116afdf47410b042a7baa3ddf8aa5d23c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706671547631430152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkvj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83805bb8-284a-4f67-b53a-c19bf5d51b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2e9bd9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d,PodSandboxId:484a94885270dc3e1cbb1c2f2d6e4d1365bd8c3429b4fb0d1c279fb2c9dc88e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706671547607013454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb6c1a2-9c1e-442c-abb3-6e993cb70875,},Annotations:map[string]string{io.kubernetes.container.hash: 66c9214c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4,PodSandboxId:85199cebf804647aa6c3dff02648dfcc3303e91c73ae6cff42cb744567568c3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706671546824870791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hvxjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16747666-47f2-4cf0-85d0-0cffecb9c7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 74254cf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55,PodSandboxId:4b8f0fe58c28ec4161dd6663f89c963d58c7c33d18a7d2970d4f8303877d160e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706671524835460137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d4944aae9f235fb622314a14d620e5,},Annotations:map[string]string{io.kubernetes.container.hash: 821c95d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b,PodSandboxId:2e920b86b8123ef8bbf2fa2fbb40273bfd8a43c971ec4d9a221da0f05021c1aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706671524633340227,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b03d711ccb681cf0411001a27ad2efa,},Annotations:map[string]string{io.kubernetes.container.hash: af741e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2,PodSandboxId:72db6dc25f93d650f92199c6f48a2501ccab07bd577e1ec89f99136d65b2966e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706671524171584635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90272abfeb358ef11870fd0e00f0291b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770,PodSandboxId:6b364a443707c3e19e2543f645e2a97b327ad0c277dcfa09e0ad8022fea22dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706671524208434902,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9b8c032ab8631a35d6e23d51a4c137,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a002d1a9-70ec-4960-b12f-29cbc29a3d1b name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.806894985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6a60d75b-6fde-404b-86c6-31e63ed752e6 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.806950564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6a60d75b-6fde-404b-86c6-31e63ed752e6 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.808510916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d69e208f-7142-4cce-a0a4-e0c254736909 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.808950862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672370808930154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d69e208f-7142-4cce-a0a4-e0c254736909 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.809790630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2dfbbb7c-b134-4fef-a26d-1c6d974eef03 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.809840868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2dfbbb7c-b134-4fef-a26d-1c6d974eef03 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.810015631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916,PodSandboxId:c6f7afec463a0cb9b0d5613dab03cf5116afdf47410b042a7baa3ddf8aa5d23c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706671547631430152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkvj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83805bb8-284a-4f67-b53a-c19bf5d51b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2e9bd9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d,PodSandboxId:484a94885270dc3e1cbb1c2f2d6e4d1365bd8c3429b4fb0d1c279fb2c9dc88e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706671547607013454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb6c1a2-9c1e-442c-abb3-6e993cb70875,},Annotations:map[string]string{io.kubernetes.container.hash: 66c9214c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4,PodSandboxId:85199cebf804647aa6c3dff02648dfcc3303e91c73ae6cff42cb744567568c3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706671546824870791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hvxjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16747666-47f2-4cf0-85d0-0cffecb9c7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 74254cf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55,PodSandboxId:4b8f0fe58c28ec4161dd6663f89c963d58c7c33d18a7d2970d4f8303877d160e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706671524835460137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d4944aae9f235fb622314a14d620e5,},Annotations:map[string]string{io.kubernetes.container.hash: 821c95d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b,PodSandboxId:2e920b86b8123ef8bbf2fa2fbb40273bfd8a43c971ec4d9a221da0f05021c1aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706671524633340227,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b03d711ccb681cf0411001a27ad2efa,},Annotations:map[string]string{io.kubernetes.container.hash: af741e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2,PodSandboxId:72db6dc25f93d650f92199c6f48a2501ccab07bd577e1ec89f99136d65b2966e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706671524171584635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90272abfeb358ef11870fd0e00f0291b,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770,PodSandboxId:6b364a443707c3e19e2543f645e2a97b327ad0c277dcfa09e0ad8022fea22dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706671524208434902,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9b8c032ab8631a35d6e23d51a4c137,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2dfbbb7c-b134-4fef-a26d-1c6d974eef03 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.845556409Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1fc76e6e-e4ee-44ac-9a3d-3176d4b1caf1 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.845769493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1fc76e6e-e4ee-44ac-9a3d-3176d4b1caf1 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.847329122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ec75a34d-95fa-404d-9291-c9280cf6faf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.847757944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672370847741291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ec75a34d-95fa-404d-9291-c9280cf6faf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.848429493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5bda71a2-85b7-421a-af4a-f5efbe616348 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.848492163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5bda71a2-85b7-421a-af4a-f5efbe616348 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.848753187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916,PodSandboxId:c6f7afec463a0cb9b0d5613dab03cf5116afdf47410b042a7baa3ddf8aa5d23c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706671547631430152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkvj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83805bb8-284a-4f67-b53a-c19bf5d51b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2e9bd9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d,PodSandboxId:484a94885270dc3e1cbb1c2f2d6e4d1365bd8c3429b4fb0d1c279fb2c9dc88e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706671547607013454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb6c1a2-9c1e-442c-abb3-6e993cb70875,},Annotations:map[string]string{io.kubernetes.container.hash: 66c9214c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4,PodSandboxId:85199cebf804647aa6c3dff02648dfcc3303e91c73ae6cff42cb744567568c3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706671546824870791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hvxjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16747666-47f2-4cf0-85d0-0cffecb9c7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 74254cf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55,PodSandboxId:4b8f0fe58c28ec4161dd6663f89c963d58c7c33d18a7d2970d4f8303877d160e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706671524835460137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d4944aae9f235fb622314a14d620e5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 821c95d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b,PodSandboxId:2e920b86b8123ef8bbf2fa2fbb40273bfd8a43c971ec4d9a221da0f05021c1aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706671524633340227,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b03d711ccb681cf0411001a27ad2efa,},Annotations:map
[string]string{io.kubernetes.container.hash: af741e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2,PodSandboxId:72db6dc25f93d650f92199c6f48a2501ccab07bd577e1ec89f99136d65b2966e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706671524171584635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90272abfeb358ef11870fd0e00f0291b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770,PodSandboxId:6b364a443707c3e19e2543f645e2a97b327ad0c277dcfa09e0ad8022fea22dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706671524208434902,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9b8c032ab8631a35d6e23d51a4c137,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5bda71a2-85b7-421a-af4a-f5efbe616348 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.885829418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=77372b5f-abc4-41c6-9c3b-f0e45c8fb7be name=/runtime.v1.RuntimeService/Version
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.885886459Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=77372b5f-abc4-41c6-9c3b-f0e45c8fb7be name=/runtime.v1.RuntimeService/Version
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.887413079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b9f00ccd-2be4-4f5a-bdae-31c0e3585754 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.887879881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672370887862206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=b9f00ccd-2be4-4f5a-bdae-31c0e3585754 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.888703286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3cd9b199-7f2b-4202-aa00-c632b1ecbfc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.888749877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3cd9b199-7f2b-4202-aa00-c632b1ecbfc0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:30 no-preload-625812 crio[723]: time="2024-01-31 03:39:30.888942357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916,PodSandboxId:c6f7afec463a0cb9b0d5613dab03cf5116afdf47410b042a7baa3ddf8aa5d23c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706671547631430152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkvj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83805bb8-284a-4f67-b53a-c19bf5d51b40,},Annotations:map[string]string{io.kubernetes.container.hash: 8d2e9bd9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d,PodSandboxId:484a94885270dc3e1cbb1c2f2d6e4d1365bd8c3429b4fb0d1c279fb2c9dc88e9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706671547607013454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb6c1a2-9c1e-442c-abb3-6e993cb70875,},Annotations:map[string]string{io.kubernetes.container.hash: 66c9214c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4,PodSandboxId:85199cebf804647aa6c3dff02648dfcc3303e91c73ae6cff42cb744567568c3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706671546824870791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hvxjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16747666-47f2-4cf0-85d0-0cffecb9c7a6,},Annotations:map[string]string{io.kubernetes.container.hash: 74254cf8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55,PodSandboxId:4b8f0fe58c28ec4161dd6663f89c963d58c7c33d18a7d2970d4f8303877d160e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706671524835460137,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4d4944aae9f235fb622314a14d620e5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 821c95d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b,PodSandboxId:2e920b86b8123ef8bbf2fa2fbb40273bfd8a43c971ec4d9a221da0f05021c1aa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706671524633340227,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b03d711ccb681cf0411001a27ad2efa,},Annotations:map
[string]string{io.kubernetes.container.hash: af741e94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2,PodSandboxId:72db6dc25f93d650f92199c6f48a2501ccab07bd577e1ec89f99136d65b2966e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1706671524171584635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90272abfeb358ef11870fd0e00f0291b,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770,PodSandboxId:6b364a443707c3e19e2543f645e2a97b327ad0c277dcfa09e0ad8022fea22dfa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706671524208434902,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-625812,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae9b8c032ab8631a35d6e23d51a4c137,},A
nnotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3cd9b199-7f2b-4202-aa00-c632b1ecbfc0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ccb4de319e9dc       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   c6f7afec463a0       kube-proxy-pkvj6
	4433aa1e7b647       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   484a94885270d       storage-provisioner
	7f1e547f6a32e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   85199cebf8046       coredns-76f75df574-hvxjf
	906c3b43d364f       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   4b8f0fe58c28e       etcd-no-preload-625812
	5d6fe45d31ec2       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   2e920b86b8123       kube-apiserver-no-preload-625812
	31fb1f9e7e60b       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   6b364a443707c       kube-controller-manager-no-preload-625812
	6f838a7ac635d       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   72db6dc25f93d       kube-scheduler-no-preload-625812
	
	
	==> coredns [7f1e547f6a32effad8bf73cf61e4a8a2612fffaa7f50458ea24547d217bc95f4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               no-preload-625812
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-625812
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=no-preload-625812
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:25:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-625812
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 03:39:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:36:05 +0000   Wed, 31 Jan 2024 03:25:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:36:05 +0000   Wed, 31 Jan 2024 03:25:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:36:05 +0000   Wed, 31 Jan 2024 03:25:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:36:05 +0000   Wed, 31 Jan 2024 03:25:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.23
	  Hostname:    no-preload-625812
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a3e353dccbd4b1ab490fca2c6c6d8ff
	  System UUID:                2a3e353d-ccbd-4b1a-b490-fca2c6c6d8ff
	  Boot ID:                    398cccd6-75db-4294-9247-8c15b6816d91
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-hvxjf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-625812                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-625812             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-625812    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pkvj6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-625812             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-vjnfp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-625812 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-625812 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-625812 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-625812 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-625812 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-625812 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node no-preload-625812 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node no-preload-625812 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-625812 event: Registered Node no-preload-625812 in Controller
	
	
	==> dmesg <==
	[Jan31 03:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073692] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan31 03:20] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.935025] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.126568] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.622058] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.615884] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.121408] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.161691] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.120255] systemd-fstab-generator[684]: Ignoring "noauto" for root device
	[  +0.225197] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[ +29.075869] systemd-fstab-generator[1335]: Ignoring "noauto" for root device
	[Jan31 03:21] kauditd_printk_skb: 29 callbacks suppressed
	[Jan31 03:25] systemd-fstab-generator[3903]: Ignoring "noauto" for root device
	[  +9.802284] systemd-fstab-generator[4235]: Ignoring "noauto" for root device
	[ +13.455711] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [906c3b43d364f60b421a38cf5d4f492a4987f03ce5afa428c457ef8d0224fb55] <==
	{"level":"info","ts":"2024-01-31T03:25:26.735988Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6814d9c7955506c5","initial-advertise-peer-urls":["https://192.168.72.23:2380"],"listen-peer-urls":["https://192.168.72.23:2380"],"advertise-client-urls":["https://192.168.72.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-31T03:25:26.735815Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.23:2380"}
	{"level":"info","ts":"2024-01-31T03:25:26.742239Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-31T03:25:26.742517Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.23:2380"}
	{"level":"info","ts":"2024-01-31T03:25:27.337891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-31T03:25:27.337964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-31T03:25:27.338016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 received MsgPreVoteResp from 6814d9c7955506c5 at term 1"}
	{"level":"info","ts":"2024-01-31T03:25:27.338034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 became candidate at term 2"}
	{"level":"info","ts":"2024-01-31T03:25:27.338043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 received MsgVoteResp from 6814d9c7955506c5 at term 2"}
	{"level":"info","ts":"2024-01-31T03:25:27.338064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6814d9c7955506c5 became leader at term 2"}
	{"level":"info","ts":"2024-01-31T03:25:27.338074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6814d9c7955506c5 elected leader 6814d9c7955506c5 at term 2"}
	{"level":"info","ts":"2024-01-31T03:25:27.339445Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:25:27.340797Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6814d9c7955506c5","local-member-attributes":"{Name:no-preload-625812 ClientURLs:[https://192.168.72.23:2379]}","request-path":"/0/members/6814d9c7955506c5/attributes","cluster-id":"64e1bcbd7b58f1a0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:25:27.340867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:25:27.341484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:25:27.341683Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"64e1bcbd7b58f1a0","local-member-id":"6814d9c7955506c5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:25:27.341797Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:25:27.341858Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:25:27.342941Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:25:27.343002Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T03:25:27.344008Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:25:27.344661Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.23:2379"}
	{"level":"info","ts":"2024-01-31T03:35:27.378639Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":717}
	{"level":"info","ts":"2024-01-31T03:35:27.382026Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":717,"took":"2.912681ms","hash":3714494484}
	{"level":"info","ts":"2024-01-31T03:35:27.382096Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3714494484,"revision":717,"compact-revision":-1}
	
	
	==> kernel <==
	 03:39:31 up 19 min,  0 users,  load average: 0.01, 0.11, 0.15
	Linux no-preload-625812 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [5d6fe45d31ec266d9b459b75160460b7c121624a878dc629a3d74d95e0479a4b] <==
	I0131 03:33:29.749948       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:35:28.751707       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:35:28.751856       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0131 03:35:29.752734       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:35:29.752862       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:35:29.752874       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:35:29.752743       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:35:29.752915       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:35:29.753912       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:36:29.753412       1 handler_proxy.go:93] no RequestInfo found in the context
	W0131 03:36:29.754062       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:36:29.754127       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:36:29.754161       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0131 03:36:29.754124       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:36:29.755549       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:38:29.754755       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:38:29.754880       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:38:29.754894       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:38:29.755902       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:38:29.756000       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:38:29.756092       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [31fb1f9e7e60bc4d0ec7fdb068267f0696db133a4c0562af6502b873426a6770] <==
	I0131 03:33:44.528925       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:34:14.020932       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:34:14.537062       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:34:44.025539       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:34:44.545859       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:35:14.031561       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:35:14.554126       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:35:44.038817       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:35:44.565919       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:36:14.044741       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:36:14.575365       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:36:44.050948       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:36:44.584713       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:36:48.370196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="127.846µs"
	I0131 03:37:02.372943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="92.059µs"
	E0131 03:37:14.056194       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:37:14.593575       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:37:44.062560       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:37:44.606253       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:38:14.067334       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:38:14.615473       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:38:44.073116       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:38:44.624023       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:39:14.080807       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:39:14.633135       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ccb4de319e9dc5e19ec392b390ad370af06506257760a6d6e230d9fbcb7d3916] <==
	I0131 03:25:47.925582       1 server_others.go:72] "Using iptables proxy"
	I0131 03:25:47.944168       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.23"]
	I0131 03:25:47.990275       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0131 03:25:47.990376       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:25:47.990418       1 server_others.go:168] "Using iptables Proxier"
	I0131 03:25:47.994223       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:25:47.994520       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0131 03:25:47.994550       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:25:47.995767       1 config.go:188] "Starting service config controller"
	I0131 03:25:47.995811       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:25:47.995830       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:25:47.995834       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:25:47.997474       1 config.go:315] "Starting node config controller"
	I0131 03:25:47.997503       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:25:48.096000       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0131 03:25:48.096150       1 shared_informer.go:318] Caches are synced for service config
	I0131 03:25:48.097689       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6f838a7ac635d813c7c4dba9c5d03d88b43ac65e653067e6742fbf2d26c29ae2] <==
	W0131 03:25:28.783777       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0131 03:25:28.783785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0131 03:25:28.783888       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:25:28.783901       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:25:28.783991       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 03:25:28.784002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0131 03:25:29.618996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:25:29.619151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0131 03:25:29.626470       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0131 03:25:29.626541       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0131 03:25:29.744274       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 03:25:29.744440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0131 03:25:29.803840       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:25:29.803975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 03:25:29.952697       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:25:29.952849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0131 03:25:30.002164       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:25:30.002293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0131 03:25:30.078485       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0131 03:25:30.078693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0131 03:25:30.100241       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:25:30.100353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:25:30.250011       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 03:25:30.250061       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0131 03:25:32.066241       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:20:05 UTC, ends at Wed 2024-01-31 03:39:31 UTC. --
	Jan 31 03:36:34 no-preload-625812 kubelet[4242]: E0131 03:36:34.363461    4242 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 31 03:36:34 no-preload-625812 kubelet[4242]: E0131 03:36:34.363502    4242 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 31 03:36:34 no-preload-625812 kubelet[4242]: E0131 03:36:34.363767    4242 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tw7jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-vjnfp_kube-system(7227d151-55ff-45b0-a85a-090f5d6ff6f3): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:36:34 no-preload-625812 kubelet[4242]: E0131 03:36:34.363810    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:36:48 no-preload-625812 kubelet[4242]: E0131 03:36:48.351150    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:37:02 no-preload-625812 kubelet[4242]: E0131 03:37:02.352134    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:37:14 no-preload-625812 kubelet[4242]: E0131 03:37:14.352072    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:37:26 no-preload-625812 kubelet[4242]: E0131 03:37:26.356426    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:37:32 no-preload-625812 kubelet[4242]: E0131 03:37:32.419309    4242 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:37:32 no-preload-625812 kubelet[4242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:37:32 no-preload-625812 kubelet[4242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:37:32 no-preload-625812 kubelet[4242]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:37:40 no-preload-625812 kubelet[4242]: E0131 03:37:40.351726    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:37:54 no-preload-625812 kubelet[4242]: E0131 03:37:54.351083    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:38:06 no-preload-625812 kubelet[4242]: E0131 03:38:06.352308    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:38:21 no-preload-625812 kubelet[4242]: E0131 03:38:21.350433    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:38:32 no-preload-625812 kubelet[4242]: E0131 03:38:32.418967    4242 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:38:32 no-preload-625812 kubelet[4242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:38:32 no-preload-625812 kubelet[4242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:38:32 no-preload-625812 kubelet[4242]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:38:36 no-preload-625812 kubelet[4242]: E0131 03:38:36.352148    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:38:47 no-preload-625812 kubelet[4242]: E0131 03:38:47.353945    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:39:02 no-preload-625812 kubelet[4242]: E0131 03:39:02.350794    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:39:13 no-preload-625812 kubelet[4242]: E0131 03:39:13.352272    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	Jan 31 03:39:28 no-preload-625812 kubelet[4242]: E0131 03:39:28.352111    4242 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vjnfp" podUID="7227d151-55ff-45b0-a85a-090f5d6ff6f3"
	
	
	==> storage-provisioner [4433aa1e7b6474e9f9effc685fdc53e0c6a28d9dd41330ba6cd5284b1e9fd58d] <==
	I0131 03:25:47.849742       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 03:25:47.861957       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 03:25:47.862028       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 03:25:47.886924       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 03:25:47.889203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f510a143-3344-4930-b9b2-dc5e181fbc36", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-625812_ba8b2ff5-e085-4ccd-bdcf-9fa5c6417682 became leader
	I0131 03:25:47.890114       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-625812_ba8b2ff5-e085-4ccd-bdcf-9fa5c6417682!
	I0131 03:25:47.991173       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-625812_ba8b2ff5-e085-4ccd-bdcf-9fa5c6417682!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-625812 -n no-preload-625812
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-625812 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vjnfp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-625812 describe pod metrics-server-57f55c9bc5-vjnfp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-625812 describe pod metrics-server-57f55c9bc5-vjnfp: exit status 1 (64.952792ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vjnfp" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-625812 describe pod metrics-server-57f55c9bc5-vjnfp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (278.70s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (231.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0131 03:35:25.141550 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:35:30.924041 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 03:36:41.531488 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:37:12.249134 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:37:48.510397 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-711547 -n old-k8s-version-711547
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-31 03:39:11.012683121 +0000 UTC m=+5709.654030945
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-711547 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-711547 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.337µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-711547 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-711547 -n old-k8s-version-711547
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-711547 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-711547 logs -n 25: (1.671268271s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-711547        | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC | 31 Jan 24 03:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:11 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-873005  | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC |                     |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229073             | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229073                  | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229073 --memory=2200 --alsologtostderr   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-229073 image list                           | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-096443 | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | disable-driver-mounts-096443                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625812                  | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:25 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-711547             | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-873005       | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-958254            | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:29 UTC |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-958254                 | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:17 UTC | 31 Jan 24 03:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:17:03
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:17:03.356553 1466459 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:17:03.356722 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356731 1466459 out.go:309] Setting ErrFile to fd 2...
	I0131 03:17:03.356736 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356921 1466459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:17:03.357497 1466459 out.go:303] Setting JSON to false
	I0131 03:17:03.358564 1466459 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28767,"bootTime":1706642257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:17:03.358632 1466459 start.go:138] virtualization: kvm guest
	I0131 03:17:03.361346 1466459 out.go:177] * [embed-certs-958254] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:17:03.363037 1466459 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:17:03.363052 1466459 notify.go:220] Checking for updates...
	I0131 03:17:03.364655 1466459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:17:03.366388 1466459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:17:03.368086 1466459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:17:03.369351 1466459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:17:03.370735 1466459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:17:03.372623 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:17:03.373004 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.373116 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.388091 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0131 03:17:03.388612 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.389200 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.389224 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.389606 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.389816 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.390157 1466459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:17:03.390631 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.390696 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.407513 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0131 03:17:03.408013 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.408552 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.408578 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.408936 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.409175 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.446580 1466459 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 03:17:03.447834 1466459 start.go:298] selected driver: kvm2
	I0131 03:17:03.447850 1466459 start.go:902] validating driver "kvm2" against &{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.447974 1466459 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:17:03.448798 1466459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.448929 1466459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:17:03.464292 1466459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:17:03.464713 1466459 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:17:03.464803 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:17:03.464821 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:17:03.464840 1466459 start_flags.go:321] config:
	{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.465034 1466459 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.466926 1466459 out.go:177] * Starting control plane node embed-certs-958254 in cluster embed-certs-958254
	I0131 03:17:03.166851 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:03.468094 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:17:03.468158 1466459 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:17:03.468179 1466459 cache.go:56] Caching tarball of preloaded images
	I0131 03:17:03.468267 1466459 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:17:03.468280 1466459 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:17:03.468422 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:17:03.468675 1466459 start.go:365] acquiring machines lock for embed-certs-958254: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:17:09.246814 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:12.318761 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:18.398731 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:21.470788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:27.550785 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:30.622804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:36.702802 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:39.774755 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:45.854764 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:48.926773 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:55.006804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:58.078768 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:04.158801 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:07.230749 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:13.310800 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:16.382788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:22.462833 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:25.534734 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:31.614821 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:34.686831 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:40.766796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:43.838796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:49.918807 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:52.923102 1465727 start.go:369] acquired machines lock for "old-k8s-version-711547" in 4m24.328353275s
	I0131 03:18:52.923156 1465727 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:18:52.923163 1465727 fix.go:54] fixHost starting: 
	I0131 03:18:52.923502 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:18:52.923535 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:18:52.938858 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0131 03:18:52.939426 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:18:52.939966 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:18:52.939993 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:18:52.940435 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:18:52.940700 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:18:52.940890 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:18:52.942694 1465727 fix.go:102] recreateIfNeeded on old-k8s-version-711547: state=Stopped err=<nil>
	I0131 03:18:52.942735 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	W0131 03:18:52.942937 1465727 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:18:52.944846 1465727 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-711547" ...
	I0131 03:18:52.946449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Start
	I0131 03:18:52.946661 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring networks are active...
	I0131 03:18:52.947481 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network default is active
	I0131 03:18:52.947856 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network mk-old-k8s-version-711547 is active
	I0131 03:18:52.948334 1465727 main.go:141] libmachine: (old-k8s-version-711547) Getting domain xml...
	I0131 03:18:52.949108 1465727 main.go:141] libmachine: (old-k8s-version-711547) Creating domain...
	I0131 03:18:52.920695 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:18:52.920763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:18:52.922905 1465496 machine.go:91] provisioned docker machine in 4m37.358485704s
	I0131 03:18:52.922986 1465496 fix.go:56] fixHost completed within 4m37.381896689s
	I0131 03:18:52.922997 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 4m37.381936859s
	W0131 03:18:52.923026 1465496 start.go:694] error starting host: provision: host is not running
	W0131 03:18:52.923126 1465496 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0131 03:18:52.923138 1465496 start.go:709] Will try again in 5 seconds ...
	I0131 03:18:54.170545 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting to get IP...
	I0131 03:18:54.171580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.171974 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.172053 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.171968 1467209 retry.go:31] will retry after 195.285731ms: waiting for machine to come up
	I0131 03:18:54.368768 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.369288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.369325 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.369224 1467209 retry.go:31] will retry after 291.163288ms: waiting for machine to come up
	I0131 03:18:54.661822 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.662222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.662266 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.662214 1467209 retry.go:31] will retry after 396.125436ms: waiting for machine to come up
	I0131 03:18:55.059613 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.060062 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.060099 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.060009 1467209 retry.go:31] will retry after 609.786973ms: waiting for machine to come up
	I0131 03:18:55.671954 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.672388 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.672431 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.672334 1467209 retry.go:31] will retry after 716.179011ms: waiting for machine to come up
	I0131 03:18:56.390239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:56.390632 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:56.390667 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:56.390568 1467209 retry.go:31] will retry after 881.998023ms: waiting for machine to come up
	I0131 03:18:57.274841 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:57.275260 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:57.275293 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:57.275202 1467209 retry.go:31] will retry after 1.172177257s: waiting for machine to come up
	I0131 03:18:58.449291 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:58.449814 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:58.449869 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:58.449774 1467209 retry.go:31] will retry after 1.046487536s: waiting for machine to come up
	I0131 03:18:57.925392 1465496 start.go:365] acquiring machines lock for no-preload-625812: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:18:59.498215 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:59.498699 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:59.498739 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:59.498640 1467209 retry.go:31] will retry after 1.563889217s: waiting for machine to come up
	I0131 03:19:01.063580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:01.064137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:01.064179 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:01.064063 1467209 retry.go:31] will retry after 2.225514736s: waiting for machine to come up
	I0131 03:19:03.290747 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:03.291285 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:03.291322 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:03.291205 1467209 retry.go:31] will retry after 2.011947032s: waiting for machine to come up
	I0131 03:19:05.305574 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:05.306072 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:05.306106 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:05.306012 1467209 retry.go:31] will retry after 3.104285698s: waiting for machine to come up
	I0131 03:19:08.411557 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:08.412028 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:08.412054 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:08.411975 1467209 retry.go:31] will retry after 4.201966677s: waiting for machine to come up
	I0131 03:19:12.618299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.618866 1465727 main.go:141] libmachine: (old-k8s-version-711547) Found IP for machine: 192.168.50.63
	I0131 03:19:12.618893 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserving static IP address...
	I0131 03:19:12.618913 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has current primary IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.619364 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserved static IP address: 192.168.50.63
	I0131 03:19:12.619389 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting for SSH to be available...
	I0131 03:19:12.619414 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.619452 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | skip adding static IP to network mk-old-k8s-version-711547 - found existing host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"}
	I0131 03:19:12.619471 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Getting to WaitForSSH function...
	I0131 03:19:12.621473 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621783 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.621805 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621891 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH client type: external
	I0131 03:19:12.621934 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa (-rw-------)
	I0131 03:19:12.621965 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:12.621977 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | About to run SSH command:
	I0131 03:19:12.621987 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | exit 0
	I0131 03:19:12.718254 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:12.718659 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetConfigRaw
	I0131 03:19:12.719369 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:12.722134 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722588 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.722611 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722906 1465727 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/config.json ...
	I0131 03:19:12.723101 1465727 machine.go:88] provisioning docker machine ...
	I0131 03:19:12.723121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:12.723399 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723611 1465727 buildroot.go:166] provisioning hostname "old-k8s-version-711547"
	I0131 03:19:12.723630 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723795 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.726052 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726463 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.726507 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726656 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.726832 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727022 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727122 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.727283 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.727665 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.727680 1465727 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-711547 && echo "old-k8s-version-711547" | sudo tee /etc/hostname
	I0131 03:19:12.870818 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-711547
	
	I0131 03:19:12.870872 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.873799 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874205 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.874242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874355 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.874585 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874774 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874920 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.875079 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.875412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.875428 1465727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-711547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-711547/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-711547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:13.014386 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:13.014419 1465727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:13.014447 1465727 buildroot.go:174] setting up certificates
	I0131 03:19:13.014460 1465727 provision.go:83] configureAuth start
	I0131 03:19:13.014471 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:13.014821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:13.017730 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018105 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.018149 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018286 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.020361 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020680 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.020707 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020896 1465727 provision.go:138] copyHostCerts
	I0131 03:19:13.020961 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:13.020975 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:13.021069 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:13.021199 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:13.021212 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:13.021252 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:13.021393 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:13.021404 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:13.021442 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:13.021512 1465727 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-711547 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube old-k8s-version-711547]
	I0131 03:19:13.265370 1465727 provision.go:172] copyRemoteCerts
	I0131 03:19:13.265438 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:13.265466 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.268546 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269055 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.269090 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269281 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.269518 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.269688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.269849 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.362848 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:13.384287 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0131 03:19:13.405813 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:19:13.427630 1465727 provision.go:86] duration metric: configureAuth took 413.151329ms
	I0131 03:19:13.427671 1465727 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:13.427880 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:19:13.427963 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.430829 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.431299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431515 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.431771 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.431939 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.432092 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.432256 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.432619 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.432638 1465727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:14.011257 1465898 start.go:369] acquired machines lock for "default-k8s-diff-port-873005" in 4m34.419162413s
	I0131 03:19:14.011330 1465898 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:14.011340 1465898 fix.go:54] fixHost starting: 
	I0131 03:19:14.011729 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:14.011767 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:14.028941 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0131 03:19:14.029399 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:14.029937 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:19:14.029968 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:14.030321 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:14.030510 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:14.030692 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:19:14.032290 1465898 fix.go:102] recreateIfNeeded on default-k8s-diff-port-873005: state=Stopped err=<nil>
	I0131 03:19:14.032322 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	W0131 03:19:14.032499 1465898 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:14.034263 1465898 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-873005" ...
	I0131 03:19:14.035857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Start
	I0131 03:19:14.036028 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring networks are active...
	I0131 03:19:14.036734 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network default is active
	I0131 03:19:14.037140 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network mk-default-k8s-diff-port-873005 is active
	I0131 03:19:14.037572 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Getting domain xml...
	I0131 03:19:14.038254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Creating domain...
	I0131 03:19:13.745584 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:13.745630 1465727 machine.go:91] provisioned docker machine in 1.02251207s
	I0131 03:19:13.745646 1465727 start.go:300] post-start starting for "old-k8s-version-711547" (driver="kvm2")
	I0131 03:19:13.745663 1465727 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:13.745688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:13.746069 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:13.746100 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.748837 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749259 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.749309 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749489 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.749691 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.749848 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.749999 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.844423 1465727 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:13.848230 1465727 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:13.848263 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:13.848346 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:13.848431 1465727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:13.848517 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:13.857046 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:13.877753 1465727 start.go:303] post-start completed in 132.085834ms
	I0131 03:19:13.877806 1465727 fix.go:56] fixHost completed within 20.954639604s
	I0131 03:19:13.877836 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.880627 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.880914 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.880948 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.881168 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.881401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881594 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881802 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.882012 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.882412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.882424 1465727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:14.011062 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671153.963761136
	
	I0131 03:19:14.011098 1465727 fix.go:206] guest clock: 1706671153.963761136
	I0131 03:19:14.011111 1465727 fix.go:219] Guest: 2024-01-31 03:19:13.963761136 +0000 UTC Remote: 2024-01-31 03:19:13.877812082 +0000 UTC m=+285.451358106 (delta=85.949054ms)
	I0131 03:19:14.011141 1465727 fix.go:190] guest clock delta is within tolerance: 85.949054ms
	I0131 03:19:14.011149 1465727 start.go:83] releasing machines lock for "old-k8s-version-711547", held for 21.088010365s
	I0131 03:19:14.011234 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.011556 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:14.014323 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014754 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.014790 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014966 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015623 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015846 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015953 1465727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:14.016017 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.016087 1465727 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:14.016121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.018767 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019063 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019147 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019185 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019338 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019422 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019450 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019500 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019693 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.019775 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019854 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.019952 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.020096 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.111280 1465727 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:14.148710 1465727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:14.287476 1465727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:14.293232 1465727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:14.293309 1465727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:14.306910 1465727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:14.306939 1465727 start.go:475] detecting cgroup driver to use...
	I0131 03:19:14.307001 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:14.325824 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:14.339835 1465727 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:14.339908 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:14.354064 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:14.367342 1465727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:14.476462 1465727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:14.602643 1465727 docker.go:233] disabling docker service ...
	I0131 03:19:14.602711 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:14.618228 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:14.630450 1465727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:14.758176 1465727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:14.870949 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:14.882268 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:14.898622 1465727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0131 03:19:14.898685 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.907377 1465727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:14.907470 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.915868 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.924046 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.932324 1465727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:14.941046 1465727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:14.949134 1465727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:14.949196 1465727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:14.965561 1465727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:14.973790 1465727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:15.078782 1465727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:15.239650 1465727 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:15.239735 1465727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:15.244418 1465727 start.go:543] Will wait 60s for crictl version
	I0131 03:19:15.244501 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:15.247984 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:15.287716 1465727 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:15.287827 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.339818 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.393318 1465727 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0131 03:19:15.394911 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:15.397888 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:15.398313 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398637 1465727 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:15.402865 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:15.414268 1465727 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 03:19:15.414361 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:15.460589 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:15.460676 1465727 ssh_runner.go:195] Run: which lz4
	I0131 03:19:15.464663 1465727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:15.468694 1465727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:15.468728 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0131 03:19:17.115892 1465727 crio.go:444] Took 1.651263 seconds to copy over tarball
	I0131 03:19:17.115979 1465727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:15.308732 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting to get IP...
	I0131 03:19:15.309704 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310121 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.310092 1467325 retry.go:31] will retry after 215.51674ms: waiting for machine to come up
	I0131 03:19:15.527614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528155 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528192 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.528108 1467325 retry.go:31] will retry after 346.07944ms: waiting for machine to come up
	I0131 03:19:15.875792 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876340 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876375 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.876290 1467325 retry.go:31] will retry after 476.08407ms: waiting for machine to come up
	I0131 03:19:16.353712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354323 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.354196 1467325 retry.go:31] will retry after 382.739917ms: waiting for machine to come up
	I0131 03:19:16.738958 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739534 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739566 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.739504 1467325 retry.go:31] will retry after 511.138171ms: waiting for machine to come up
	I0131 03:19:17.252373 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252862 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252902 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:17.252798 1467325 retry.go:31] will retry after 879.985444ms: waiting for machine to come up
	I0131 03:19:18.134757 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135287 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135313 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:18.135233 1467325 retry.go:31] will retry after 1.043236668s: waiting for machine to come up
	I0131 03:19:19.179844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180339 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180369 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:19.180288 1467325 retry.go:31] will retry after 1.296129808s: waiting for machine to come up
	I0131 03:19:19.822171 1465727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706149181s)
	I0131 03:19:19.822217 1465727 crio.go:451] Took 2.706292 seconds to extract the tarball
	I0131 03:19:19.822233 1465727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:19.861493 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:19.905950 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:19.905979 1465727 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:19:19.906033 1465727 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.906061 1465727 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.906080 1465727 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.906077 1465727 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.906094 1465727 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:19.906099 1465727 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.906111 1465727 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0131 03:19:19.906179 1465727 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907636 1465727 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.907728 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.907746 1465727 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907750 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.907749 1465727 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.907783 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.907805 1465727 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0131 03:19:19.907807 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.091717 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.132448 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.140199 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0131 03:19:20.146177 1465727 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0131 03:19:20.146263 1465727 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.146324 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.206757 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.216932 1465727 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0131 03:19:20.216985 1465727 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.217082 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219340 1465727 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0131 03:19:20.219367 1465727 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0131 03:19:20.219390 1465727 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.219408 1465727 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.219432 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219449 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.222519 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.241389 1465727 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0131 03:19:20.241449 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.241452 1465727 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0131 03:19:20.241566 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.293129 1465727 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0131 03:19:20.293183 1465727 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.293213 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.293262 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.293284 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.293232 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321447 1465727 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0131 03:19:20.321512 1465727 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.321576 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321605 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0131 03:19:20.321743 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0131 03:19:20.401651 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0131 03:19:20.401720 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.401731 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0131 03:19:20.401793 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0131 03:19:20.401872 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0131 03:19:20.401945 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.439360 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0131 03:19:20.449635 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0131 03:19:20.765201 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:20.911818 1465727 cache_images.go:92] LoadImages completed in 1.005820808s
	W0131 03:19:20.911923 1465727 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0131 03:19:20.912019 1465727 ssh_runner.go:195] Run: crio config
	I0131 03:19:20.978267 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:20.978296 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:20.978318 1465727 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:20.978361 1465727 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-711547 NodeName:old-k8s-version-711547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0131 03:19:20.978540 1465727 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-711547"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-711547
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.63:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:20.978635 1465727 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-711547 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:19:20.978690 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0131 03:19:20.988177 1465727 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:20.988281 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:20.999558 1465727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0131 03:19:21.018567 1465727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:21.036137 1465727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0131 03:19:21.051742 1465727 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:21.056334 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:21.068635 1465727 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547 for IP: 192.168.50.63
	I0131 03:19:21.068670 1465727 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:21.068847 1465727 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:21.068894 1465727 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:21.069089 1465727 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/client.key
	I0131 03:19:21.069185 1465727 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key.1519f60b
	I0131 03:19:21.069262 1465727 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key
	I0131 03:19:21.069418 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:21.069460 1465727 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:21.069476 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:21.069517 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:21.069556 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:21.069595 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:21.069658 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:21.070416 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:21.096160 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:21.119906 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:21.144478 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:21.169174 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:21.191807 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:21.215673 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:21.237705 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:21.262763 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:21.284935 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:21.306372 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:21.327718 1465727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:21.343219 1465727 ssh_runner.go:195] Run: openssl version
	I0131 03:19:21.348904 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:21.358119 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362537 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362619 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.368555 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:21.378236 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:21.387651 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392087 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392155 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.397511 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:21.406631 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:21.416176 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420716 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420816 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.426032 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:21.434979 1465727 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:21.439153 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:21.444648 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:21.450243 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:21.455489 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:21.460794 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:21.466219 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
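	The six "openssl x509 -noout -in <cert> -checkend 86400" runs above verify that each existing control-plane certificate remains valid for at least another 24 hours (86400 seconds) before the old cluster state is reused. A minimal Go sketch of an equivalent check — illustrative only, not minikube source; the helper name and the sample paths in main are assumptions:

	// Sketch: report whether a PEM certificate stays valid for at least d more time,
	// mirroring `openssl x509 -noout -in <cert> -checkend 86400` from the log above.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor is a hypothetical helper, not a minikube function.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Passes only if the certificate's NotAfter lies beyond now+d.
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			ok, err := validFor(p, 24*time.Hour)
			fmt.Println(p, ok, err)
		}
	}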
	I0131 03:19:21.471530 1465727 kubeadm.go:404] StartCluster: {Name:old-k8s-version-711547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:21.471628 1465727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:21.471677 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:21.508722 1465727 cri.go:89] found id: ""
	I0131 03:19:21.508795 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:21.517913 1465727 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:21.517943 1465727 kubeadm.go:636] restartCluster start
	I0131 03:19:21.518012 1465727 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:21.526290 1465727 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:21.527501 1465727 kubeconfig.go:92] found "old-k8s-version-711547" server: "https://192.168.50.63:8443"
	I0131 03:19:21.530259 1465727 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:21.538442 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:21.538528 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:21.548956 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.038468 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.038574 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.049394 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.538605 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.538701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.549651 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:23.038857 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.038988 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.050489 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:20.478788 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479296 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479341 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:20.479262 1467325 retry.go:31] will retry after 1.385706797s: waiting for machine to come up
	I0131 03:19:21.867040 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867480 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867506 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:21.867432 1467325 retry.go:31] will retry after 2.023566474s: waiting for machine to come up
	I0131 03:19:23.893713 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894188 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894222 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:23.894119 1467325 retry.go:31] will retry after 2.335724195s: waiting for machine to come up
	I0131 03:19:23.539335 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.539444 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.550866 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.038592 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.038710 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.050077 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.538579 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.538661 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.549810 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.039420 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.039512 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.051101 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.538549 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.538654 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.552821 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.039279 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.039395 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.050150 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.538699 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.538841 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.553086 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.038585 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.038701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.050685 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.539261 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.539392 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.550316 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:28.039448 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.039564 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.051196 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.231540 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231945 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231970 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:26.231895 1467325 retry.go:31] will retry after 2.956919877s: waiting for machine to come up
	I0131 03:19:29.190010 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190513 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190549 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:29.190433 1467325 retry.go:31] will retry after 3.186526476s: waiting for machine to come up
	I0131 03:19:28.539230 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.539326 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.551055 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.038675 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.038783 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.049926 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.538507 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.538606 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.549309 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.039257 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.039359 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.050555 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.539147 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.539286 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.550179 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.038685 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.038809 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.050144 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.538939 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.539024 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.549604 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.549647 1465727 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:31.549660 1465727 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:31.549678 1465727 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:31.549770 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:31.587751 1465727 cri.go:89] found id: ""
	I0131 03:19:31.587822 1465727 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:31.603397 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:31.612195 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:31.612263 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620959 1465727 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620984 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:31.737416 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.645078 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.861238 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.944897 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:33.048396 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:33.048496 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
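	The "waiting for apiserver process to appear" step, like the earlier "Checking apiserver status ..." loop, polls "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms until the process shows up or a deadline expires (which is what produced the "context deadline exceeded" above). A minimal sketch of that poll-until-deadline pattern in Go — the function name and timeout are assumptions, and it runs pgrep locally rather than over SSH:

	// Sketch: poll pgrep until it succeeds or the context deadline passes.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer is a hypothetical helper; pattern and interval mirror the log.
	func waitForAPIServer(ctx context.Context, pattern string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			// Try immediately, then again on every tick.
			if out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output(); err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver did not appear: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		_ = waitForAPIServer(ctx, "kube-apiserver.*minikube.*")
	}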
	I0131 03:19:33.587337 1466459 start.go:369] acquired machines lock for "embed-certs-958254" in 2m30.118621848s
	I0131 03:19:33.587411 1466459 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:33.587444 1466459 fix.go:54] fixHost starting: 
	I0131 03:19:33.587872 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:33.587906 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:33.608024 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0131 03:19:33.608545 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:33.609015 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:19:33.609048 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:33.609468 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:33.609659 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:33.609796 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:19:33.611524 1466459 fix.go:102] recreateIfNeeded on embed-certs-958254: state=Stopped err=<nil>
	I0131 03:19:33.611572 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	W0131 03:19:33.611752 1466459 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:33.613613 1466459 out.go:177] * Restarting existing kvm2 VM for "embed-certs-958254" ...
	I0131 03:19:32.379632 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380099 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380134 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Found IP for machine: 192.168.61.123
	I0131 03:19:32.380150 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserving static IP address...
	I0131 03:19:32.380555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserved static IP address: 192.168.61.123
	I0131 03:19:32.380594 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.380610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for SSH to be available...
	I0131 03:19:32.380647 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | skip adding static IP to network mk-default-k8s-diff-port-873005 - found existing host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"}
	I0131 03:19:32.380661 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Getting to WaitForSSH function...
	I0131 03:19:32.382401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.382787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382872 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH client type: external
	I0131 03:19:32.382903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa (-rw-------)
	I0131 03:19:32.382943 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:32.382959 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | About to run SSH command:
	I0131 03:19:32.382984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | exit 0
	I0131 03:19:32.470672 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:32.471097 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetConfigRaw
	I0131 03:19:32.471768 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.474225 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474597 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.474631 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474948 1465898 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/config.json ...
	I0131 03:19:32.475139 1465898 machine.go:88] provisioning docker machine ...
	I0131 03:19:32.475158 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:32.475374 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475542 1465898 buildroot.go:166] provisioning hostname "default-k8s-diff-port-873005"
	I0131 03:19:32.475564 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475720 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.478005 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478356 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.478391 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478466 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.478693 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.478871 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.479083 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.479287 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.479622 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.479636 1465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-873005 && echo "default-k8s-diff-port-873005" | sudo tee /etc/hostname
	I0131 03:19:32.608136 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-873005
	
	I0131 03:19:32.608173 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.611145 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611544 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.611580 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611716 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.611937 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612154 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612354 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.612511 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.612878 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.612903 1465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-873005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-873005/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-873005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:32.734103 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:32.734144 1465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:32.734176 1465898 buildroot.go:174] setting up certificates
	I0131 03:19:32.734196 1465898 provision.go:83] configureAuth start
	I0131 03:19:32.734209 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.734550 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.737468 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.737810 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.737844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.738096 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.740787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.741233 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741374 1465898 provision.go:138] copyHostCerts
	I0131 03:19:32.741429 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:32.741442 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:32.741498 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:32.741632 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:32.741642 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:32.741665 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:32.741716 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:32.741722 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:32.741738 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:32.741784 1465898 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-873005 san=[192.168.61.123 192.168.61.123 localhost 127.0.0.1 minikube default-k8s-diff-port-873005]
	I0131 03:19:32.850632 1465898 provision.go:172] copyRemoteCerts
	I0131 03:19:32.850695 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:32.850721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.853291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.853651 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.854016 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.854194 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.854361 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:32.943528 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0131 03:19:32.970345 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:32.995909 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:33.024408 1465898 provision.go:86] duration metric: configureAuth took 290.196472ms
	I0131 03:19:33.024438 1465898 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:33.024661 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:33.024755 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.027751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.028312 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028469 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.028719 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.028961 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.029180 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.029424 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.029790 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.029810 1465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:33.350806 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:33.350839 1465898 machine.go:91] provisioned docker machine in 875.685131ms
	I0131 03:19:33.350855 1465898 start.go:300] post-start starting for "default-k8s-diff-port-873005" (driver="kvm2")
	I0131 03:19:33.350871 1465898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:33.350895 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.351287 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:33.351334 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.353986 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354419 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.354443 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354689 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.354898 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.355046 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.355221 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.439603 1465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:33.443119 1465898 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:33.443145 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:33.443222 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:33.443320 1465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:33.443430 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:33.451425 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:33.471270 1465898 start.go:303] post-start completed in 120.397142ms
	I0131 03:19:33.471302 1465898 fix.go:56] fixHost completed within 19.459960903s
	I0131 03:19:33.471326 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.473691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474060 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.474091 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474244 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.474430 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474627 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474753 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.474918 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.475237 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.475249 1465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:33.587174 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671173.532604525
	
	I0131 03:19:33.587202 1465898 fix.go:206] guest clock: 1706671173.532604525
	I0131 03:19:33.587217 1465898 fix.go:219] Guest: 2024-01-31 03:19:33.532604525 +0000 UTC Remote: 2024-01-31 03:19:33.47130747 +0000 UTC m=+294.038044427 (delta=61.297055ms)
	I0131 03:19:33.587243 1465898 fix.go:190] guest clock delta is within tolerance: 61.297055ms
	I0131 03:19:33.587251 1465898 start.go:83] releasing machines lock for "default-k8s-diff-port-873005", held for 19.57594393s
	I0131 03:19:33.587282 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.587557 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:33.590395 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590776 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.590809 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590995 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591623 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591822 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591926 1465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:33.591999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.592054 1465898 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:33.592078 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.594999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595446 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.595477 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595644 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.595805 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595879 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596082 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596258 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.596286 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.596380 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.596390 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.596579 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596760 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596951 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.715222 1465898 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:33.721794 1465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:33.871506 1465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:33.877488 1465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:33.877596 1465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:33.896121 1465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:33.896156 1465898 start.go:475] detecting cgroup driver to use...
	I0131 03:19:33.896245 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:33.912876 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:33.927661 1465898 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:33.927743 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:33.944332 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:33.960438 1465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:34.086879 1465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:34.218866 1465898 docker.go:233] disabling docker service ...
	I0131 03:19:34.218946 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:34.233585 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:34.246358 1465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:34.387480 1465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:34.513082 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:34.526532 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:34.544801 1465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:34.544902 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.558806 1465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:34.558905 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.569251 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.582784 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.595979 1465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:34.608318 1465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:34.616417 1465898 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:34.616494 1465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:34.629018 1465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:34.638513 1465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:34.753541 1465898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:34.963779 1465898 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:34.963868 1465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:34.969755 1465898 start.go:543] Will wait 60s for crictl version
	I0131 03:19:34.969826 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:19:34.974176 1465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:35.020759 1465898 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:35.020850 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.072999 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.143712 1465898 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
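	The CRI-O preparation at 03:19:34 above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed one-liners, pinning pause_image to registry.k8s.io/pause:3.9 and cgroup_manager to cgroupfs before restarting crio. The same keyed line rewrite can be sketched in Go as below — a sketch only, not minikube source; setConfValue is a made-up helper:

	// Sketch: replace any `key = ...` line in a CRI-O drop-in with `key = "value"`,
	// matching the sed expressions shown in the log.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func setConfValue(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		_ = setConfValue("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.9")
		_ = setConfValue("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
	}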
	I0131 03:19:33.615078 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Start
	I0131 03:19:33.615258 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring networks are active...
	I0131 03:19:33.616056 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network default is active
	I0131 03:19:33.616376 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network mk-embed-certs-958254 is active
	I0131 03:19:33.616770 1466459 main.go:141] libmachine: (embed-certs-958254) Getting domain xml...
	I0131 03:19:33.617424 1466459 main.go:141] libmachine: (embed-certs-958254) Creating domain...
	I0131 03:19:35.016562 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting to get IP...
	I0131 03:19:35.017711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.018134 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.018234 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.018115 1467469 retry.go:31] will retry after 281.115622ms: waiting for machine to come up
	I0131 03:19:35.300987 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.301642 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.301672 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.301583 1467469 retry.go:31] will retry after 382.696531ms: waiting for machine to come up
	I0131 03:19:35.686371 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.686945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.686983 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.686881 1467469 retry.go:31] will retry after 467.397008ms: waiting for machine to come up
	I0131 03:19:36.156392 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.157129 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.157161 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.157087 1467469 retry.go:31] will retry after 588.034996ms: waiting for machine to come up
	I0131 03:19:36.747103 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.747739 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.747771 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.747711 1467469 retry.go:31] will retry after 570.532804ms: waiting for machine to come up
	I0131 03:19:37.319694 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.320231 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.320264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.320206 1467469 retry.go:31] will retry after 572.77687ms: waiting for machine to come up
	I0131 03:19:37.895308 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.895814 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.895844 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.895769 1467469 retry.go:31] will retry after 833.23491ms: waiting for machine to come up
	I0131 03:19:33.549149 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.048799 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.549314 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.048885 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.075463 1465727 api_server.go:72] duration metric: took 2.027068042s to wait for apiserver process to appear ...
	I0131 03:19:35.075490 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:35.075525 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:35.145198 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:35.148610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149052 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:35.149087 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149329 1465898 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:35.153543 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:35.169144 1465898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:35.169226 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:35.217572 1465898 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:35.217675 1465898 ssh_runner.go:195] Run: which lz4
	I0131 03:19:35.221897 1465898 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0131 03:19:35.226333 1465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:35.226373 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:36.870773 1465898 crio.go:444] Took 1.648904 seconds to copy over tarball
	I0131 03:19:36.870903 1465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
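[Editor's note] When crictl reports no preloaded images (crio.go:492 above), the preload tarball is copied to the guest and unpacked into /var with tar reading through lz4. A hedged local approximation of that extract step; the paths and tar flags are the ones from the log, but this is not the project's ssh_runner code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"

    	// Mirror the existence check from the log: only extract when the file
    	// is actually on disk.
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Println("no preload tarball present:", err)
    		return
    	}

    	start := time.Now()
    	// Same flags as the log line: keep xattrs (including security.capability)
    	// and decompress through lz4 while extracting into /var.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Println("extract failed:", err)
    		return
    	}
    	fmt.Printf("Took %s to extract the tarball\n", time.Since(start))

    	// The log removes the tarball afterwards to free space.
    	_ = os.Remove(tarball)
    }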
	I0131 03:19:38.730812 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:38.731317 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:38.731367 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:38.731283 1467469 retry.go:31] will retry after 1.083923411s: waiting for machine to come up
	I0131 03:19:39.816550 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:39.817000 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:39.817035 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:39.816957 1467469 retry.go:31] will retry after 1.414569505s: waiting for machine to come up
	I0131 03:19:41.232711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:41.233072 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:41.233104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:41.233020 1467469 retry.go:31] will retry after 1.829994317s: waiting for machine to come up
	I0131 03:19:43.065343 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:43.065823 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:43.065857 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:43.065760 1467469 retry.go:31] will retry after 2.506323142s: waiting for machine to come up
	I0131 03:19:40.076389 1465727 api_server.go:269] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0131 03:19:40.076448 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.717017 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.717059 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:41.717079 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.738258 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.738291 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:42.075699 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.730135 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.730181 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:42.730203 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.805335 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.805375 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.076421 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.082935 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:43.082971 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.575664 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.582814 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:19:43.593073 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:19:43.593113 1465727 api_server.go:131] duration metric: took 8.517613988s to wait for apiserver health ...
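[Editor's note] The block above is the apiserver readiness loop: GET /healthz, treat 403 (anonymous user before RBAC bootstrap finishes) and 500 (poststarthooks still failing) as "not yet", and stop once it returns 200 ok. A compact sketch of that polling logic; the endpoint and the TLS-verification skip mirror the log's self-signed test cluster, and this is an illustration rather than minikube's api_server.go:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Self-signed cluster certificate, so skip verification for this probe.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	const url = "https://192.168.50.63:8443/healthz"

    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			// apiserver not listening yet; try again shortly.
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("healthz returned 200:", string(body))
    			return
    		}
    		// 403 before RBAC bootstrap and 500 while poststarthooks fail both
    		// mean "keep waiting".
    		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }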
	I0131 03:19:43.593127 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:43.593144 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:43.594982 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:19:39.815034 1465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944091458s)
	I0131 03:19:39.815074 1465898 crio.go:451] Took 2.944224 seconds to extract the tarball
	I0131 03:19:39.815090 1465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:39.855696 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:39.904386 1465898 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:19:39.904418 1465898 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:19:39.904509 1465898 ssh_runner.go:195] Run: crio config
	I0131 03:19:39.972894 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:19:39.972928 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:39.972957 1465898 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:39.972985 1465898 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-873005 NodeName:default-k8s-diff-port-873005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:19:39.973201 1465898 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-873005"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:39.973298 1465898 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-873005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0131 03:19:39.973365 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:19:39.982097 1465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:39.982206 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:39.993781 1465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0131 03:19:40.012618 1465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:40.031973 1465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0131 03:19:40.049646 1465898 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:40.053498 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
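[Editor's note] The /etc/hosts updates above use a grep-and-append idiom: drop any existing line for the name, re-emit the rest of the file plus a fresh entry into a temp file, then copy it back over /etc/hosts with sudo. A stand-alone sketch of the same idiom that operates on a throwaway hosts file instead of the real one (illustrative, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // upsertHostsEntry removes any line ending in "\t<name>" and appends a fresh
    // "<ip>\t<name>" entry, mirroring the grep -v / echo / cp pipeline in the log.
    func upsertHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop the stale entry, like grep -v
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	// Write a temp copy first, then move it into place; the log's
    	// /tmp/h.$$ plus "sudo cp" achieves the same with shell tools.
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	dir, _ := os.MkdirTemp("", "hosts")
    	hosts := filepath.Join(dir, "hosts")
    	os.WriteFile(hosts, []byte("127.0.0.1\tlocalhost\n"), 0644)
    	if err := upsertHostsEntry(hosts, "192.168.61.123", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    		return
    	}
    	out, _ := os.ReadFile(hosts)
    	fmt.Print(string(out))
    }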
	I0131 03:19:40.066873 1465898 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005 for IP: 192.168.61.123
	I0131 03:19:40.066914 1465898 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:40.067198 1465898 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:40.067254 1465898 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:40.067376 1465898 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/client.key
	I0131 03:19:40.067474 1465898 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key.596e38b1
	I0131 03:19:40.067535 1465898 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key
	I0131 03:19:40.067748 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:40.067797 1465898 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:40.067813 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:40.067850 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:40.067885 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:40.067924 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:40.067978 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:40.068687 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:40.094577 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:40.117833 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:40.140782 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:40.163701 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:40.187177 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:40.218570 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:40.246136 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:40.275403 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:40.302040 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:40.327371 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:40.352927 1465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:40.371690 1465898 ssh_runner.go:195] Run: openssl version
	I0131 03:19:40.377700 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:40.387507 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393609 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393701 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.401095 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:40.415647 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:40.426902 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431720 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431803 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.437347 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:40.446986 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:40.457779 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462716 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462790 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.468321 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:40.481055 1465898 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:40.486096 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:40.492538 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:40.498664 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:40.504630 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:40.510588 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:40.516480 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
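[Editor's note] Each "openssl x509 -noout -in … -checkend 86400" call above asks whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit means it is about to lapse and would need regenerating. The same check expressed in Go, reading the certificate directly instead of shelling out (a sketch; the path list is copied from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	certs := []string{
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
    		"/var/lib/minikube/certs/etcd/peer.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, c := range certs {
    		soon, err := expiresWithin(c, 24*time.Hour) // 86400 seconds, as in the log
    		if err != nil {
    			fmt.Println(c, "check failed:", err)
    			continue
    		}
    		if soon {
    			fmt.Println(c, "expires within 24h; regenerate")
    		}
    	}
    }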
	I0131 03:19:40.524391 1465898 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-873005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:40.524509 1465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:40.524570 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:40.575788 1465898 cri.go:89] found id: ""
	I0131 03:19:40.575887 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:40.585291 1465898 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:40.585320 1465898 kubeadm.go:636] restartCluster start
	I0131 03:19:40.585383 1465898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:40.594593 1465898 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:40.596215 1465898 kubeconfig.go:92] found "default-k8s-diff-port-873005" server: "https://192.168.61.123:8444"
	I0131 03:19:40.600123 1465898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:40.609224 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:40.609289 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:40.620769 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.110331 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.110450 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.121982 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.609492 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.609592 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.621972 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.109411 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.109515 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.124820 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.609296 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.609412 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.621029 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.109511 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.109606 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.124911 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.609397 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.609514 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.626240 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:44.109323 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.109419 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.124549 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.573357 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:45.573785 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:45.573821 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:45.573735 1467469 retry.go:31] will retry after 3.608126135s: waiting for machine to come up
	I0131 03:19:43.596636 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:19:43.613239 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:19:43.655123 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:19:43.665773 1465727 system_pods.go:59] 7 kube-system pods found
	I0131 03:19:43.665819 1465727 system_pods.go:61] "coredns-5644d7b6d9-2g2fj" [fc3c718c-696b-4a57-83e2-d9ee3bed6923] Running
	I0131 03:19:43.665844 1465727 system_pods.go:61] "etcd-old-k8s-version-711547" [4c5a2527-ffa7-4771-8380-56556030ad90] Running
	I0131 03:19:43.665852 1465727 system_pods.go:61] "kube-apiserver-old-k8s-version-711547" [df7cbcbe-1aeb-4986-82e5-70d495b2579d] Running
	I0131 03:19:43.665859 1465727 system_pods.go:61] "kube-controller-manager-old-k8s-version-711547" [21cccd1c-4b8e-4d4f-956d-872aa474e9d8] Running
	I0131 03:19:43.665868 1465727 system_pods.go:61] "kube-proxy-7dtkz" [aac05831-252e-486d-8bc8-772731374a89] Running
	I0131 03:19:43.665875 1465727 system_pods.go:61] "kube-scheduler-old-k8s-version-711547" [da2f43ad-bbc3-44fb-a608-08c2ae08818f] Running
	I0131 03:19:43.665885 1465727 system_pods.go:61] "storage-provisioner" [f16355c3-b573-40f2-ad98-32c077f04e46] Running
	I0131 03:19:43.665894 1465727 system_pods.go:74] duration metric: took 10.742015ms to wait for pod list to return data ...
	I0131 03:19:43.665915 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:19:43.670287 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:19:43.670328 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:19:43.670343 1465727 node_conditions.go:105] duration metric: took 4.422551ms to run NodePressure ...
	I0131 03:19:43.670366 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:43.947579 1465727 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:19:43.952499 1465727 retry.go:31] will retry after 170.414704ms: kubelet not initialised
	I0131 03:19:44.130420 1465727 retry.go:31] will retry after 504.822426ms: kubelet not initialised
	I0131 03:19:44.640095 1465727 retry.go:31] will retry after 519.270243ms: kubelet not initialised
	I0131 03:19:45.164417 1465727 retry.go:31] will retry after 730.256814ms: kubelet not initialised
	I0131 03:19:45.903026 1465727 retry.go:31] will retry after 853.098887ms: kubelet not initialised
	I0131 03:19:46.764300 1465727 retry.go:31] will retry after 2.467014704s: kubelet not initialised
	I0131 03:19:44.609572 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.609682 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.625242 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.109761 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.109894 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.121467 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.610114 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.610210 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.621421 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.109926 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.109996 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.121003 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.609509 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.609649 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.620779 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.110208 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.110316 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.122909 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.609355 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.609474 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.620375 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.109993 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.110131 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.123531 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.610170 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.610266 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.620964 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.109874 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.109997 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.121344 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.183666 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:49.184174 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:49.184209 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:49.184103 1467469 retry.go:31] will retry after 3.277150176s: waiting for machine to come up
	I0131 03:19:52.465465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.465830 1466459 main.go:141] libmachine: (embed-certs-958254) Found IP for machine: 192.168.39.232
	I0131 03:19:52.465849 1466459 main.go:141] libmachine: (embed-certs-958254) Reserving static IP address...
	I0131 03:19:52.465863 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has current primary IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.466264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.466307 1466459 main.go:141] libmachine: (embed-certs-958254) Reserved static IP address: 192.168.39.232
	I0131 03:19:52.466331 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting for SSH to be available...
	I0131 03:19:52.466352 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | skip adding static IP to network mk-embed-certs-958254 - found existing host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"}
	I0131 03:19:52.466368 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Getting to WaitForSSH function...
	I0131 03:19:52.468562 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.468867 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.468900 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.469041 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH client type: external
	I0131 03:19:52.469074 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa (-rw-------)
	I0131 03:19:52.469117 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:52.469137 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | About to run SSH command:
	I0131 03:19:52.469151 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | exit 0
	I0131 03:19:52.554397 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | SSH cmd err, output: <nil>: 
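[Editor's note] The WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and simply runs "exit 0"; success means the guest's sshd is up and the key is accepted. A hedged sketch of that probe; the option list is taken from the log, while the retry cadence here is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs "exit 0" on the remote host with the same kind of options the
    // log shows for libmachine's external SSH client.
    func sshReady(user, host, keyPath string) bool {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "PasswordAuthentication=no",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		user + "@" + host,
    		"exit 0",
    	}
    	return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
    	const (
    		user = "docker"
    		host = "192.168.39.232"
    		key  = "/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa"
    	)
    	for i := 0; i < 30; i++ {
    		if sshReady(user, host, key) {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }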
	I0131 03:19:52.554838 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetConfigRaw
	I0131 03:19:52.555611 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.558511 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.558906 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.558945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.559188 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:19:52.559400 1466459 machine.go:88] provisioning docker machine ...
	I0131 03:19:52.559421 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:52.559645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559816 1466459 buildroot.go:166] provisioning hostname "embed-certs-958254"
	I0131 03:19:52.559831 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559994 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.562543 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.562901 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.562933 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.563085 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.563276 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563436 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563628 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.563800 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.564147 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.564161 1466459 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-958254 && echo "embed-certs-958254" | sudo tee /etc/hostname
	I0131 03:19:52.688777 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-958254
	
	I0131 03:19:52.688817 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.692015 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.692497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692797 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.693013 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693184 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693388 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.693579 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.694043 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.694071 1466459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-958254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-958254/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-958254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:52.821443 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:52.821489 1466459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:52.821543 1466459 buildroot.go:174] setting up certificates
	I0131 03:19:52.821567 1466459 provision.go:83] configureAuth start
	I0131 03:19:52.821583 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.821930 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.825108 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825496 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.825527 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825756 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.828269 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828621 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.828651 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828893 1466459 provision.go:138] copyHostCerts
	I0131 03:19:52.828964 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:52.828987 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:52.829069 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:52.829194 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:52.829209 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:52.829243 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:52.829323 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:52.829335 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:52.829366 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:52.829466 1466459 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.embed-certs-958254 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube embed-certs-958254]
	I0131 03:19:52.931760 1466459 provision.go:172] copyRemoteCerts
	I0131 03:19:52.931825 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:52.931856 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.935111 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935440 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.935465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935721 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.935915 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.936117 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.936273 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.024352 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:53.051185 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:19:53.076996 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:53.097919 1466459 provision.go:86] duration metric: configureAuth took 276.335726ms
	I0131 03:19:53.097951 1466459 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:53.098189 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:53.098319 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.101687 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102128 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.102178 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102334 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.102610 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.102877 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.103072 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.103309 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.103829 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.103860 1466459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:49.236547 1465727 retry.go:31] will retry after 1.793227218s: kubelet not initialised
	I0131 03:19:51.035248 1465727 retry.go:31] will retry after 2.779615352s: kubelet not initialised
	I0131 03:19:53.664145 1465496 start.go:369] acquired machines lock for "no-preload-625812" in 55.738696582s
	I0131 03:19:53.664205 1465496 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:53.664216 1465496 fix.go:54] fixHost starting: 
	I0131 03:19:53.664629 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:53.664680 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:53.683147 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0131 03:19:53.684034 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:53.684629 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:19:53.684660 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:53.685055 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:53.685266 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:19:53.685468 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:19:53.687260 1465496 fix.go:102] recreateIfNeeded on no-preload-625812: state=Stopped err=<nil>
	I0131 03:19:53.687288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	W0131 03:19:53.687444 1465496 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:53.689464 1465496 out.go:177] * Restarting existing kvm2 VM for "no-preload-625812" ...
	I0131 03:19:49.610240 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.610357 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.621551 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.110145 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.110248 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.121902 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.609752 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.609896 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.620729 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.620760 1465898 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:50.620769 1465898 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:50.620781 1465898 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:50.620842 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:50.655962 1465898 cri.go:89] found id: ""
	I0131 03:19:50.656034 1465898 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:50.670196 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:50.678438 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:50.678512 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686353 1465898 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686377 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:50.787983 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.766656 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.947670 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.020841 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.087869 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:52.087974 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:52.588285 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.088598 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.588683 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.088222 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.416070 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:53.416102 1466459 machine.go:91] provisioned docker machine in 856.686657ms
	I0131 03:19:53.416115 1466459 start.go:300] post-start starting for "embed-certs-958254" (driver="kvm2")
	I0131 03:19:53.416130 1466459 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:53.416152 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.416515 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:53.416550 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.419110 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.419525 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419836 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.420057 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.420223 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.420376 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.503785 1466459 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:53.507858 1466459 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:53.507890 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:53.508021 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:53.508094 1466459 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:53.508184 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:53.515845 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:53.537459 1466459 start.go:303] post-start completed in 121.324433ms
	I0131 03:19:53.537495 1466459 fix.go:56] fixHost completed within 19.950074846s
	I0131 03:19:53.537526 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.540719 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541097 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.541138 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541371 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.541590 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541707 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541924 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.542116 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.542438 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.542452 1466459 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:53.663950 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671193.614107467
	
	I0131 03:19:53.663981 1466459 fix.go:206] guest clock: 1706671193.614107467
	I0131 03:19:53.663991 1466459 fix.go:219] Guest: 2024-01-31 03:19:53.614107467 +0000 UTC Remote: 2024-01-31 03:19:53.537501013 +0000 UTC m=+170.232508862 (delta=76.606454ms)
	I0131 03:19:53.664051 1466459 fix.go:190] guest clock delta is within tolerance: 76.606454ms
	I0131 03:19:53.664061 1466459 start.go:83] releasing machines lock for "embed-certs-958254", held for 20.076673524s
	I0131 03:19:53.664095 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.664469 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:53.667439 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668024 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.668104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668219 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.668884 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669087 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669227 1466459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:53.669314 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.669346 1466459 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:53.669377 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.673093 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673248 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673420 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673194 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673517 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673557 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673580 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673667 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673734 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.673969 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.673982 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.674173 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.674180 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.674312 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.799336 1466459 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:53.805162 1466459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:53.952587 1466459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:53.958419 1466459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:53.958530 1466459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:53.971832 1466459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:53.971866 1466459 start.go:475] detecting cgroup driver to use...
	I0131 03:19:53.971946 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:53.988375 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:54.000875 1466459 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:54.000948 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:54.017770 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:54.034214 1466459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:54.154352 1466459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:54.314926 1466459 docker.go:233] disabling docker service ...
	I0131 03:19:54.315012 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:54.330557 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:54.344595 1466459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:54.468196 1466459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:54.630438 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:54.645472 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:54.665340 1466459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:54.665427 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.677758 1466459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:54.677843 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.690405 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.702616 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.712654 1466459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:54.723746 1466459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:54.735284 1466459 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:54.735358 1466459 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:54.751082 1466459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:54.762460 1466459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:54.916842 1466459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:55.105770 1466459 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:55.105862 1466459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:55.111870 1466459 start.go:543] Will wait 60s for crictl version
	I0131 03:19:55.112014 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:19:55.116743 1466459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:55.165427 1466459 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:55.165526 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.223389 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.272307 1466459 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:19:53.690828 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Start
	I0131 03:19:53.691030 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring networks are active...
	I0131 03:19:53.691801 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network default is active
	I0131 03:19:53.692297 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network mk-no-preload-625812 is active
	I0131 03:19:53.693485 1465496 main.go:141] libmachine: (no-preload-625812) Getting domain xml...
	I0131 03:19:53.694618 1465496 main.go:141] libmachine: (no-preload-625812) Creating domain...
	I0131 03:19:55.042532 1465496 main.go:141] libmachine: (no-preload-625812) Waiting to get IP...
	I0131 03:19:55.043607 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.044041 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.044180 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.044045 1467687 retry.go:31] will retry after 230.922351ms: waiting for machine to come up
	I0131 03:19:55.276816 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.277402 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.277435 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.277367 1467687 retry.go:31] will retry after 370.068692ms: waiting for machine to come up
	I0131 03:19:55.274017 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:55.277592 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278017 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:55.278056 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278356 1466459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:55.283769 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:55.298107 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:55.298188 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:55.338433 1466459 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:55.338558 1466459 ssh_runner.go:195] Run: which lz4
	I0131 03:19:55.342771 1466459 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:55.347160 1466459 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:55.347206 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:56.991725 1466459 crio.go:444] Took 1.648994 seconds to copy over tarball
	I0131 03:19:56.991821 1466459 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:53.823139 1465727 retry.go:31] will retry after 3.780431021s: kubelet not initialised
	I0131 03:19:57.615679 1465727 retry.go:31] will retry after 12.134340719s: kubelet not initialised
	I0131 03:19:54.588794 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.623052 1465898 api_server.go:72] duration metric: took 2.535180605s to wait for apiserver process to appear ...
	I0131 03:19:54.623092 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:54.623141 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:55.649133 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.649797 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.649838 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.649768 1467687 retry.go:31] will retry after 421.622241ms: waiting for machine to come up
	I0131 03:19:56.073712 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.074467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.074513 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.074269 1467687 retry.go:31] will retry after 587.05453ms: waiting for machine to come up
	I0131 03:19:56.663210 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.663749 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.663790 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.663678 1467687 retry.go:31] will retry after 620.56275ms: waiting for machine to come up
	I0131 03:19:57.286207 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.286688 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.286737 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.286647 1467687 retry.go:31] will retry after 674.764903ms: waiting for machine to come up
	I0131 03:19:57.963069 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.963573 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.963599 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.963520 1467687 retry.go:31] will retry after 1.10400582s: waiting for machine to come up
	I0131 03:19:59.068964 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:59.069440 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:59.069467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:59.069383 1467687 retry.go:31] will retry after 1.48867494s: waiting for machine to come up
	I0131 03:20:00.084963 1466459 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093104085s)
	I0131 03:20:00.085000 1466459 crio.go:451] Took 3.093238 seconds to extract the tarball
	I0131 03:20:00.085014 1466459 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:20:00.153533 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:00.203133 1466459 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:20:00.203215 1466459 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:20:00.203308 1466459 ssh_runner.go:195] Run: crio config
	I0131 03:20:00.266864 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:00.266898 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:00.266927 1466459 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:00.266955 1466459 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-958254 NodeName:embed-certs-958254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:00.267148 1466459 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-958254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:00.267253 1466459 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-958254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:00.267331 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:20:00.279543 1466459 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:00.279637 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:00.292463 1466459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0131 03:20:00.313102 1466459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:20:00.329962 1466459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0131 03:20:00.351487 1466459 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:00.355881 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:00.368624 1466459 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254 for IP: 192.168.39.232
	I0131 03:20:00.368668 1466459 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:00.368836 1466459 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:00.368890 1466459 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:00.368997 1466459 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/client.key
	I0131 03:20:00.369071 1466459 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key.ca7bc7e0
	I0131 03:20:00.369108 1466459 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key
	I0131 03:20:00.369230 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:00.369257 1466459 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:00.369268 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:00.369294 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:00.369317 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:00.369341 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:00.369379 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:00.370093 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:00.392771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 03:20:00.416504 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:00.441357 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 03:20:00.469603 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:00.493533 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:00.521871 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:00.547738 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:00.572771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:00.596263 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:00.618766 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:00.642074 1466459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:00.657634 1466459 ssh_runner.go:195] Run: openssl version
	I0131 03:20:00.662869 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:00.673704 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678201 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678299 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.683872 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:00.694619 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:00.705736 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710374 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710451 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.715928 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:00.727620 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:00.738237 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742428 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742525 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.747812 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:00.757953 1466459 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:00.762418 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:00.768325 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:00.773824 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:00.779967 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:00.785943 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:00.791907 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:20:00.797790 1466459 kubeadm.go:404] StartCluster: {Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:00.797882 1466459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:00.797989 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:00.843199 1466459 cri.go:89] found id: ""
	I0131 03:20:00.843289 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:00.853963 1466459 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:00.853994 1466459 kubeadm.go:636] restartCluster start
	I0131 03:20:00.854060 1466459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:00.864776 1466459 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:00.866019 1466459 kubeconfig.go:92] found "embed-certs-958254" server: "https://192.168.39.232:8443"
	I0131 03:20:00.868584 1466459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:00.878689 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:00.878765 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:00.891577 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.378755 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.378849 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.392040 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.879661 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.879770 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.894998 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.379551 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.379671 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.393008 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.879560 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.879680 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.896699 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:59.557240 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.557285 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.557308 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.612724 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.612775 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.624061 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.721181 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:19:59.721236 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.123708 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.134187 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.134229 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.624066 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.630341 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.630374 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.123728 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.131385 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.131479 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.623667 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.629384 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.629431 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.123701 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.129220 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.129272 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.623693 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.629228 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.629271 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.123778 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.132555 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:03.132617 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.623244 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.630694 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:20:03.649732 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:03.649778 1465898 api_server.go:131] duration metric: took 9.02667615s to wait for apiserver health ...
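The 403 and 500 responses above are the expected bring-up sequence: anonymous /healthz probes are rejected until the RBAC bootstrap roles land, and 500s persist while the remaining post-start hooks finish. A minimal Go sketch of this kind of poll-until-200 loop (illustrative only, not minikube's actual api_server.go code; the endpoint comes from the log, while the timeout, poll interval, and TLS handling are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. 403 (anonymous user) and 500 (post-start hooks
// still pending) both count as "not ready yet", matching the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert during bring-up, so this
		// sketch skips verification (assumption, not minikube's exact setup).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
		}
		// The log shows checks roughly every 500ms; reuse that cadence here.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.123:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}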
	I0131 03:20:03.649792 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:20:03.649802 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:03.651944 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:03.653645 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:03.683325 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
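The 457-byte file copied above is a bridge CNI conflist. A hedged sketch of what such a file typically contains, embedded in Go so the write step is visible too (the field values, pod subnet, and plugin list are assumptions based on the standard bridge template, not the literal contents of this run's /etc/cni/net.d/1-k8s.conflist):

package main

import "os"

// bridgeConflist is an illustrative bridge CNI configuration of the kind
// minikube's "Configuring bridge CNI" step installs; exact values may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Writing this path requires root on the node; shown only to make the
	// structure of the conflist concrete.
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644)
}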
	I0131 03:20:03.719778 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:03.745975 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:03.746029 1465898 system_pods.go:61] "coredns-5dd5756b68-xlq7n" [0b9d620d-d79f-474e-aeb7-1357daaaa849] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:03.746044 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [2f2f474f-bee9-4df2-a5f6-2525bc15c99a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:03.746056 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [ba87e90b-b01b-4aa7-a4da-68d8e5c39020] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:03.746088 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [a96ebed4-d6f6-47b7-a8f6-b80acc9cde60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:03.746111 1465898 system_pods.go:61] "kube-proxy-trv94" [c085dfdb-0b75-40c1-b331-ef687888090e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:03.746121 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [b7adce77-8007-4316-9a2a-bdcec260840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:03.746141 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-fct8b" [b1d9d7e3-98c4-4b7a-acd1-d88fe109ef33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:03.746155 1465898 system_pods.go:61] "storage-provisioner" [be762288-ff88-44e7-938d-9ecc8a977526] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:03.746169 1465898 system_pods.go:74] duration metric: took 26.36215ms to wait for pod list to return data ...
	I0131 03:20:03.746183 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:03.755320 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:03.755365 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:03.755384 1465898 node_conditions.go:105] duration metric: took 9.194114ms to run NodePressure ...
	I0131 03:20:03.755413 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:04.124222 1465898 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130888 1465898 kubeadm.go:787] kubelet initialised
	I0131 03:20:04.130921 1465898 kubeadm.go:788] duration metric: took 6.663771ms waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130932 1465898 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:04.141883 1465898 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
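The pod_ready waits that follow poll each system-critical pod's Ready condition until it is True or the 4m0s budget runs out. A rough client-go equivalent of one such wait, offered as a sketch under assumptions (the kubeconfig path and poll interval are illustrative, and the pod name is simply the one from this log; minikube's pod_ready.go may differ in detail):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-xlq7n", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}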
	I0131 03:20:00.559917 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:00.715628 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:00.715677 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:00.560506 1467687 retry.go:31] will retry after 1.67725835s: waiting for machine to come up
	I0131 03:20:02.240289 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:02.240826 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:02.240863 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:02.240781 1467687 retry.go:31] will retry after 2.023057937s: waiting for machine to come up
	I0131 03:20:04.266202 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:04.266733 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:04.266825 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:04.266715 1467687 retry.go:31] will retry after 2.664323304s: waiting for machine to come up
	I0131 03:20:03.379260 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.379366 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.395063 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:03.879206 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.879327 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.896172 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.378721 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.378829 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.395328 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.878823 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.878944 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.891061 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.379692 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.379795 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.395247 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.879667 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.879811 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.894445 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.378974 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.379107 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.391878 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.879343 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.879446 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.892910 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.379549 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.379647 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.391991 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.879610 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.879757 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.895280 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.154196 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:08.664906 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:06.932836 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:06.933529 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:06.933574 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:06.933459 1467687 retry.go:31] will retry after 3.065677387s: waiting for machine to come up
	I0131 03:20:10.001330 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:10.002186 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:10.002216 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:10.002101 1467687 retry.go:31] will retry after 3.036905728s: waiting for machine to come up
	I0131 03:20:08.379200 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.379310 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.392983 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:08.878955 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.879070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.890999 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.379530 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.379633 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.391351 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.878733 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.878814 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.891556 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.379098 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.379206 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.391233 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.879672 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.879786 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.892324 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.892364 1466459 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:10.892377 1466459 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:10.892393 1466459 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:10.892471 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:10.932354 1466459 cri.go:89] found id: ""
	I0131 03:20:10.932425 1466459 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:10.948273 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:10.957212 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:10.957285 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966329 1466459 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966369 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.093326 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.750399 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.960956 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.060752 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.148963 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:12.149070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:12.649736 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:13.150030 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:09.755152 1465727 retry.go:31] will retry after 13.770889272s: kubelet not initialised
	I0131 03:20:09.648674 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:09.648703 1465898 pod_ready.go:81] duration metric: took 5.506781604s waiting for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:09.648716 1465898 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656233 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:11.656258 1465898 pod_ready.go:81] duration metric: took 2.007535905s waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656267 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663570 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.663600 1465898 pod_ready.go:81] duration metric: took 1.007324961s waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668808 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.668832 1465898 pod_ready.go:81] duration metric: took 5.21407ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668843 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673583 1465898 pod_ready.go:92] pod "kube-proxy-trv94" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.673603 1465898 pod_ready.go:81] duration metric: took 4.754586ms waiting for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679052 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.679074 1465898 pod_ready.go:81] duration metric: took 5.453847ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679082 1465898 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:13.040911 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.041419 1465496 main.go:141] libmachine: (no-preload-625812) Found IP for machine: 192.168.72.23
	I0131 03:20:13.041451 1465496 main.go:141] libmachine: (no-preload-625812) Reserving static IP address...
	I0131 03:20:13.041471 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has current primary IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.042029 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.042083 1465496 main.go:141] libmachine: (no-preload-625812) Reserved static IP address: 192.168.72.23
	I0131 03:20:13.042105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | skip adding static IP to network mk-no-preload-625812 - found existing host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"}
	I0131 03:20:13.042124 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Getting to WaitForSSH function...
	I0131 03:20:13.042143 1465496 main.go:141] libmachine: (no-preload-625812) Waiting for SSH to be available...
	I0131 03:20:13.044263 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044670 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.044707 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044866 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH client type: external
	I0131 03:20:13.044890 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa (-rw-------)
	I0131 03:20:13.044958 1465496 main.go:141] libmachine: (no-preload-625812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:20:13.044979 1465496 main.go:141] libmachine: (no-preload-625812) DBG | About to run SSH command:
	I0131 03:20:13.044993 1465496 main.go:141] libmachine: (no-preload-625812) DBG | exit 0
	I0131 03:20:13.142763 1465496 main.go:141] libmachine: (no-preload-625812) DBG | SSH cmd err, output: <nil>: 
	I0131 03:20:13.143065 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetConfigRaw
	I0131 03:20:13.143763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.146827 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147322 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.147356 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147639 1465496 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/config.json ...
	I0131 03:20:13.147843 1465496 machine.go:88] provisioning docker machine ...
	I0131 03:20:13.147866 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:13.148104 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148307 1465496 buildroot.go:166] provisioning hostname "no-preload-625812"
	I0131 03:20:13.148332 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148510 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.151259 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151623 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.151658 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151808 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.152034 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152222 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152415 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.152601 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.152979 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.152996 1465496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-625812 && echo "no-preload-625812" | sudo tee /etc/hostname
	I0131 03:20:13.302957 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-625812
	
	I0131 03:20:13.302989 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.306162 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306612 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.306656 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306932 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.307236 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307458 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307644 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.307891 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.308385 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.308415 1465496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-625812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-625812/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-625812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:20:13.459393 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:20:13.459432 1465496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:20:13.459458 1465496 buildroot.go:174] setting up certificates
	I0131 03:20:13.459476 1465496 provision.go:83] configureAuth start
	I0131 03:20:13.459490 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.459818 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.462867 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463301 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.463333 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463516 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.466156 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466597 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.466629 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466788 1465496 provision.go:138] copyHostCerts
	I0131 03:20:13.466856 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:20:13.466870 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:20:13.466945 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:20:13.467051 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:20:13.467061 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:20:13.467099 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:20:13.467182 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:20:13.467195 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:20:13.467226 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:20:13.467295 1465496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.no-preload-625812 san=[192.168.72.23 192.168.72.23 localhost 127.0.0.1 minikube no-preload-625812]
	I0131 03:20:13.629331 1465496 provision.go:172] copyRemoteCerts
	I0131 03:20:13.629392 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:20:13.629420 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.632451 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.632871 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.632903 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.633155 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.633334 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.633502 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.633643 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:13.729991 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:20:13.755853 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:20:13.781125 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:20:13.803778 1465496 provision.go:86] duration metric: configureAuth took 344.286867ms
	I0131 03:20:13.803820 1465496 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:20:13.804030 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:20:13.804138 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.807234 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807675 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.807736 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807899 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.808108 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808307 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808461 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.808663 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.809033 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.809055 1465496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:20:14.179008 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:20:14.179039 1465496 machine.go:91] provisioned docker machine in 1.031179568s
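	The provisioning step above drops the container-runtime options for CRI-O (an --insecure-registry flag covering the 10.96.0.0/12 service CIDR) into /etc/sysconfig/crio.minikube and restarts the service over SSH. Below is a minimal Go sketch of that write-and-restart, assuming direct local root shell access rather than minikube's ssh_runner; only the file path and option string come from the log.

	// crio_options.go - illustrative sketch only: writes the runtime options
	// file that the provisioning step above creates and restarts CRI-O.
	// Assumes a local root shell instead of minikube's SSH runner.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := `sudo mkdir -p /etc/sysconfig && ` +
			`echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && ` +
			`sudo systemctl restart crio`
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			log.Fatalf("configuring CRI-O options failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}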
	I0131 03:20:14.179055 1465496 start.go:300] post-start starting for "no-preload-625812" (driver="kvm2")
	I0131 03:20:14.179072 1465496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:20:14.179134 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.179500 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:20:14.179542 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.183050 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183483 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.183515 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183726 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.183919 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.184103 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.184299 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.282828 1465496 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:20:14.288098 1465496 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:20:14.288135 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:20:14.288242 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:20:14.288351 1465496 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:20:14.288482 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:20:14.297359 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:14.323339 1465496 start.go:303] post-start completed in 144.265535ms
	I0131 03:20:14.323379 1465496 fix.go:56] fixHost completed within 20.659162262s
	I0131 03:20:14.323408 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.326649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.327063 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327386 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.327693 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.327882 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.328068 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.328260 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:14.328638 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:14.328668 1465496 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:20:14.464275 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671214.411008277
	
	I0131 03:20:14.464299 1465496 fix.go:206] guest clock: 1706671214.411008277
	I0131 03:20:14.464307 1465496 fix.go:219] Guest: 2024-01-31 03:20:14.411008277 +0000 UTC Remote: 2024-01-31 03:20:14.32338512 +0000 UTC m=+358.954052365 (delta=87.623157ms)
	I0131 03:20:14.464327 1465496 fix.go:190] guest clock delta is within tolerance: 87.623157ms
	I0131 03:20:14.464332 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 20.800154018s
	I0131 03:20:14.464349 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.464664 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:14.467627 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.467912 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.467952 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.468086 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468622 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468827 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468918 1465496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:20:14.468974 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.469103 1465496 ssh_runner.go:195] Run: cat /version.json
	I0131 03:20:14.469143 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.471884 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472243 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472408 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472472 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472507 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472426 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472696 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472810 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472825 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473046 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473048 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473275 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.473288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473547 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.563583 1465496 ssh_runner.go:195] Run: systemctl --version
	I0131 03:20:14.602977 1465496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:20:14.752069 1465496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:20:14.759056 1465496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:20:14.759149 1465496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:20:14.778064 1465496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:20:14.778102 1465496 start.go:475] detecting cgroup driver to use...
	I0131 03:20:14.778197 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:20:14.791672 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:20:14.803938 1465496 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:20:14.804018 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:20:14.816689 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:20:14.829415 1465496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:20:14.956428 1465496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:20:15.082172 1465496 docker.go:233] disabling docker service ...
	I0131 03:20:15.082260 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:20:15.094675 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:20:15.106262 1465496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:20:15.229460 1465496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:20:15.341585 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:20:15.354587 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:20:15.374141 1465496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:20:15.374228 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.386153 1465496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:20:15.386224 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.398130 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.407759 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.417278 1465496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:20:15.427128 1465496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:20:15.437249 1465496 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:20:15.437318 1465496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:20:15.451522 1465496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:20:15.460741 1465496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:20:15.564813 1465496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:20:15.729334 1465496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:20:15.729436 1465496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:20:15.734544 1465496 start.go:543] Will wait 60s for crictl version
	I0131 03:20:15.734634 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:15.738536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:20:15.789942 1465496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:20:15.790066 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.844864 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.895286 1465496 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
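	The sequence above points crictl at the CRI-O socket, pins the pause image to registry.k8s.io/pause:3.9, switches the cgroup driver to cgroupfs, and restarts the daemon before the 60s socket and crictl version waits. A condensed Go sketch of those edits follows, assuming a local root shell rather than the SSH runner used in the log; the file paths and sed expressions are copied from the commands above.

	// crio_config.go - illustrative sketch of the CRI-O reconfiguration shown
	// in the log: crictl endpoint, pause image, cgroupfs driver, then restart.
	package main

	import (
		"log"
		"os/exec"
	)

	// run executes one shell step and aborts on the first failure.
	func run(cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			log.Fatalf("%q failed: %v\n%s", cmd, err, out)
		}
	}

	func main() {
		run(`echo "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml`)
		run(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`)
		run(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`)
		run(`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`)
		run(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`)
		run(`sudo systemctl daemon-reload && sudo systemctl restart crio`)
	}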
	I0131 03:20:13.649824 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.150192 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.649250 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.677858 1466459 api_server.go:72] duration metric: took 2.528895825s to wait for apiserver process to appear ...
	I0131 03:20:14.677890 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:14.677920 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:14.688429 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:17.190684 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:15.896701 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:15.899655 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900079 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:15.900105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900392 1465496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0131 03:20:15.904607 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:15.916202 1465496 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 03:20:15.916255 1465496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:15.964126 1465496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0131 03:20:15.964157 1465496 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:20:15.964213 1465496 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.964249 1465496 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.964291 1465496 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.964278 1465496 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.964411 1465496 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0131 03:20:15.964472 1465496 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.964696 1465496 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.964771 1465496 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:15.965842 1465496 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.966659 1465496 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0131 03:20:15.966705 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.966737 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.967221 1465496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.967386 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.157890 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.160428 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0131 03:20:16.170727 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.185791 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.209517 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.212835 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.215809 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.221405 1465496 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0131 03:20:16.221457 1465496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.221504 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369265 1465496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0131 03:20:16.369302 1465496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0131 03:20:16.369324 1465496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.369340 1465496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.369344 1465496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0131 03:20:16.369367 1465496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.369382 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369392 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369404 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369474 1465496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0131 03:20:16.369494 1465496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.369506 1465496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0131 03:20:16.369521 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369529 1465496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.369562 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369617 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.384313 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.384333 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.470950 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0131 03:20:16.471044 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.471091 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.496271 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0131 03:20:16.496296 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496398 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496485 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:16.496488 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496338 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.496494 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496730 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.531464 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531550 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0131 03:20:16.531570 1465496 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531594 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531640 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531595 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0131 03:20:16.531669 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531638 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531738 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0131 03:20:16.536091 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0131 03:20:16.805880 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339660 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.807978952s)
	I0131 03:20:20.339703 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0131 03:20:20.339719 1465496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.533795146s)
	I0131 03:20:20.339744 1465496 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339785 1465496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0131 03:20:20.339823 1465496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339829 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339863 1465496 ssh_runner.go:195] Run: which crictl
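	Because no preload tarball exists for v1.29.0-rc.2, each required image is checked in the runtime with "podman image inspect" and, when missing ("needs transfer"), loaded from the cached tarball under /var/lib/minikube/images. A simplified Go sketch of that check-then-load path for a single image; the image reference and tarball path are taken from the log, and running the commands locally via os/exec is an assumption.

	// load_cached_image.go - illustrative sketch of the cache-load path above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// hasImage reports whether podman already has an image ID for ref.
	func hasImage(ref string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", ref).Output()
		return err == nil && strings.TrimSpace(string(out)) != ""
	}

	func main() {
		ref := "registry.k8s.io/etcd:3.5.10-0"
		tarball := "/var/lib/minikube/images/etcd_3.5.10-0"

		if hasImage(ref) {
			fmt.Println("image already present, nothing to load")
			return
		}
		// Image is missing in the container runtime: load the cached tarball.
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			log.Fatalf("podman load failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}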
	I0131 03:20:19.144422 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.144461 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.144481 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.199050 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.199092 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.199110 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.248370 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.248405 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:19.678887 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.699942 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.699975 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.178212 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.196360 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:20.196408 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.679003 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.685599 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:20:20.693909 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:20.693939 1466459 api_server.go:131] duration metric: took 6.016042033s to wait for apiserver health ...
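	The healthz wait above polls https://192.168.39.232:8443/healthz, treating the intermediate 403 and 500 responses as "not ready yet" until a 200 arrives. A minimal Go sketch of such a polling loop; skipping TLS verification here is a simplification, since minikube itself trusts the cluster CA.

	// healthz_wait.go - illustrative sketch of waiting for the apiserver
	// healthz endpoint to return 200 OK.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // simplification
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // control plane is healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.232:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}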
	I0131 03:20:20.693972 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:20.693978 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:20.695935 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:20.697296 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:20.708301 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:20.730496 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:20.741756 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:20.741799 1466459 system_pods.go:61] "coredns-5dd5756b68-ntmxp" [bb90dd61-c60a-4beb-b77c-66c4b5ce56a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:20.741810 1466459 system_pods.go:61] "etcd-embed-certs-958254" [69a5883a-307d-47d1-86ef-6f76bf77bdff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:20.741830 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [1cad3813-0df9-4729-862f-d1ab237d297c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:20.741841 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [34bfed89-5c8c-4294-843b-d32261c8fb5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:20.741851 1466459 system_pods.go:61] "kube-proxy-q6dmr" [092e0786-80f7-480c-8ede-95e11c1f17a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:20.741862 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [28c8d75e-9517-4ccc-85ef-5b535973c829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:20.741876 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-d8x5f" [fc69fea8-ab7b-4f3d-980f-7ad995027e77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:20.741889 1466459 system_pods.go:61] "storage-provisioner" [5026a00d-8df8-408a-a164-cf22697260e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:20.741898 1466459 system_pods.go:74] duration metric: took 11.375298ms to wait for pod list to return data ...
	I0131 03:20:20.741912 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:20.748073 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:20.748110 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:20.748125 1466459 node_conditions.go:105] duration metric: took 6.206594ms to run NodePressure ...
	I0131 03:20:20.748147 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:21.022867 1466459 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028572 1466459 kubeadm.go:787] kubelet initialised
	I0131 03:20:21.028600 1466459 kubeadm.go:788] duration metric: took 5.696903ms waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028612 1466459 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:21.034373 1466459 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.040977 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041008 1466459 pod_ready.go:81] duration metric: took 6.605955ms waiting for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.041021 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041029 1466459 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.047304 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047360 1466459 pod_ready.go:81] duration metric: took 6.317423ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.047379 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047397 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.054356 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054380 1466459 pod_ready.go:81] duration metric: took 6.969808ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.054393 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054405 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.066327 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:19.688890 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.187659 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.403415 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.063558989s)
	I0131 03:20:22.403448 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0131 03:20:22.403467 1465496 ssh_runner.go:235] Completed: which crictl: (2.063583602s)
	I0131 03:20:22.403536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:22.403473 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.403667 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.453126 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0131 03:20:22.453255 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:25.325221 1465496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.871938157s)
	I0131 03:20:25.325266 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0131 03:20:25.325371 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.92167713s)
	I0131 03:20:25.325397 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0131 03:20:25.325430 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.325498 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.562106 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.562702 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.562730 1466459 pod_ready.go:81] duration metric: took 5.508313651s waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.562740 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570741 1466459 pod_ready.go:92] pod "kube-proxy-q6dmr" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.570776 1466459 pod_ready.go:81] duration metric: took 8.02796ms waiting for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570788 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.532998 1465727 kubeadm.go:787] kubelet initialised
	I0131 03:20:23.533031 1465727 kubeadm.go:788] duration metric: took 39.585413252s waiting for restarted kubelet to initialise ...
	I0131 03:20:23.533041 1465727 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:23.538956 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545637 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.545665 1465727 pod_ready.go:81] duration metric: took 6.67341ms waiting for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545679 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552018 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.552047 1465727 pod_ready.go:81] duration metric: took 6.359089ms waiting for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552061 1465727 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557416 1465727 pod_ready.go:92] pod "etcd-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.557446 1465727 pod_ready.go:81] duration metric: took 5.375834ms waiting for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557458 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563429 1465727 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.563458 1465727 pod_ready.go:81] duration metric: took 5.99092ms waiting for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563470 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931088 1465727 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.931123 1465727 pod_ready.go:81] duration metric: took 367.644608ms waiting for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931135 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330635 1465727 pod_ready.go:92] pod "kube-proxy-7dtkz" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.330663 1465727 pod_ready.go:81] duration metric: took 399.520658ms waiting for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330673 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731521 1465727 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.731554 1465727 pod_ready.go:81] duration metric: took 400.873461ms waiting for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731568 1465727 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.738444 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:24.686688 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.688623 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:29.186579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.180697 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.855170809s)
	I0131 03:20:28.180729 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0131 03:20:28.180767 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:28.180841 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:29.652395 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.471522862s)
	I0131 03:20:29.652425 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0131 03:20:29.652463 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:29.652540 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:28.578108 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.077401 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.080970 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.739586 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:30.739736 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.238815 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.187176 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.188862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.502715 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.85014178s)
	I0131 03:20:31.502759 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0131 03:20:31.502787 1465496 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:31.502844 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:32.554143 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.051250967s)
	I0131 03:20:32.554188 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0131 03:20:32.554229 1465496 cache_images.go:123] Successfully loaded all cached images
	I0131 03:20:32.554282 1465496 cache_images.go:92] LoadImages completed in 16.590108265s
	I0131 03:20:32.554386 1465496 ssh_runner.go:195] Run: crio config
	I0131 03:20:32.619584 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:32.619612 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:32.619637 1465496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:32.619665 1465496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-625812 NodeName:no-preload-625812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:32.619840 1465496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-625812"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:32.619939 1465496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-625812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:32.620017 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0131 03:20:32.628855 1465496 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:32.628963 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:32.636481 1465496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0131 03:20:32.654320 1465496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0131 03:20:32.670366 1465496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0131 03:20:32.688615 1465496 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:32.692444 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:32.705599 1465496 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812 for IP: 192.168.72.23
	I0131 03:20:32.705644 1465496 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:32.705822 1465496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:32.705894 1465496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:32.705997 1465496 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/client.key
	I0131 03:20:32.706058 1465496 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key.a30a8404
	I0131 03:20:32.706092 1465496 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key
	I0131 03:20:32.706194 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:32.706221 1465496 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:32.706231 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:32.706258 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:32.706284 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:32.706310 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:32.706349 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:32.707138 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:32.729972 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:20:32.753498 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:32.775599 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:20:32.799455 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:32.822732 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:32.845839 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:32.868933 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:32.891565 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:32.914752 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:32.937305 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:32.960253 1465496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:32.976285 1465496 ssh_runner.go:195] Run: openssl version
	I0131 03:20:32.981630 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:32.990533 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994914 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994986 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:33.000249 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:33.009516 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:33.018643 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023046 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023106 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.028238 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:33.036925 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:33.045708 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050442 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050536 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.056067 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:33.065200 1465496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:33.069489 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:33.075140 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:33.080981 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:33.087018 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:33.092665 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:33.099605 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:20:33.106207 1465496 kubeadm.go:404] StartCluster: {Name:no-preload-625812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:33.106310 1465496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:33.106376 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:33.150992 1465496 cri.go:89] found id: ""
	I0131 03:20:33.151088 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:33.161105 1465496 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:33.161131 1465496 kubeadm.go:636] restartCluster start
	I0131 03:20:33.161219 1465496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:33.170638 1465496 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.172109 1465496 kubeconfig.go:92] found "no-preload-625812" server: "https://192.168.72.23:8443"
	I0131 03:20:33.175582 1465496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:33.185433 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.185523 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.196952 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.685512 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.685612 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.696682 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.186433 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.197957 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.685533 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.685640 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.696731 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:35.186267 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.186369 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.197982 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.578014 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:33.578038 1466459 pod_ready.go:81] duration metric: took 7.007241801s waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:33.578047 1466459 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:35.585039 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.585299 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.737680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.740698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686379 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:38.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686193 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.686284 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.697343 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.185858 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.185960 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.197161 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.685546 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.685646 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.696796 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.186186 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.186280 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.197357 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.685916 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.686012 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.700288 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.185723 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.185820 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.197397 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.685651 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.685757 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.697204 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.185744 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.185844 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.198598 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.686185 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.686267 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.697736 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.186432 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.198099 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.085028 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.585359 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.238117 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.239129 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.687687 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:43.186737 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.686132 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.686236 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.699172 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.185642 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.185744 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.198284 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.685827 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.685935 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.698501 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.185953 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.186088 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.196802 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.686371 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.686445 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.698536 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.186445 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:43.186560 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:43.198640 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.198679 1465496 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:43.198690 1465496 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:43.198704 1465496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:43.198765 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:43.235648 1465496 cri.go:89] found id: ""
	I0131 03:20:43.235740 1465496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:43.252848 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:43.263501 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:43.263590 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274044 1465496 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274075 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:43.402961 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.454642 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051640672s)
	I0131 03:20:44.454673 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.660185 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.744795 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.816577 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:44.816690 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:45.316895 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:44.591170 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.085954 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:44.739730 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.240982 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.686082 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.687451 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.816800 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.317657 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.816892 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.317696 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.342389 1465496 api_server.go:72] duration metric: took 2.525810484s to wait for apiserver process to appear ...
	I0131 03:20:47.342423 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:47.342448 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.385155 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.385192 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.385206 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.431253 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.431293 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.842624 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.847644 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:51.847685 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.343335 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.348723 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:52.348780 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.842935 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.848263 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:20:52.863072 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:20:52.863104 1465496 api_server.go:131] duration metric: took 5.520672047s to wait for apiserver health ...
	I0131 03:20:52.863113 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:52.863120 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:52.865141 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:49.585837 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.087030 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:49.738408 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:51.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:50.186754 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.197217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.866822 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:52.881451 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:52.918954 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:52.930533 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:52.930566 1465496 system_pods.go:61] "coredns-76f75df574-4qhpt" [9a5c2a49-f787-456a-9d15-cea2e111c6fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:52.930575 1465496 system_pods.go:61] "etcd-no-preload-625812" [2dbdb2c3-dd04-40de-80b4-caf18f1df2e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:52.930587 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [fd209808-5ebc-464e-b14b-88c6c830d7bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:52.930593 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [1f2cb9ec-cec9-4c45-8b78-0c9a9c0c9821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:52.930600 1465496 system_pods.go:61] "kube-proxy-8fdx9" [d1311d92-482b-4aa2-9dd3-053597717aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:52.930607 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [f7b0ba21-6c1d-4c67-aa69-6086b28ddf78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:52.930614 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-sjndx" [6bcdb3bb-4e28-4127-a273-091b44059d10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:52.930620 1465496 system_pods.go:61] "storage-provisioner" [66a4003b-e35e-4216-8d27-e8897a6ddc71] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:52.930627 1465496 system_pods.go:74] duration metric: took 11.645516ms to wait for pod list to return data ...
	I0131 03:20:52.930635 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:52.943250 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:52.943291 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:52.943306 1465496 node_conditions.go:105] duration metric: took 12.665118ms to run NodePressure ...
	I0131 03:20:52.943328 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:53.231968 1465496 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239131 1465496 kubeadm.go:787] kubelet initialised
	I0131 03:20:53.239162 1465496 kubeadm.go:788] duration metric: took 7.159608ms waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239171 1465496 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:53.248561 1465496 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:55.256463 1465496 pod_ready.go:102] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.585633 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.086475 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.239922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.738132 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.686904 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.687249 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.187579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.261900 1465496 pod_ready.go:92] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:57.261928 1465496 pod_ready.go:81] duration metric: took 4.013340748s waiting for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:57.261940 1465496 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:59.268779 1465496 pod_ready.go:102] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.586066 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:02.085212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:58.739138 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.739184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:03.243732 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:01.686704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.186767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.771061 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:00.771093 1465496 pod_ready.go:81] duration metric: took 3.509144879s waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:00.771107 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279749 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.279778 1465496 pod_ready.go:81] duration metric: took 1.508661327s waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279792 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286520 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.286550 1465496 pod_ready.go:81] duration metric: took 6.748377ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286564 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292455 1465496 pod_ready.go:92] pod "kube-proxy-8fdx9" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.292479 1465496 pod_ready.go:81] duration metric: took 5.904786ms waiting for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292491 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:04.300076 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.086312 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.086965 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:05.737969 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:07.738025 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.686645 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:09.186769 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.300932 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.799183 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:06.799208 1465496 pod_ready.go:81] duration metric: took 4.506710382s waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:06.799220 1465496 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:08.806102 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:08.585128 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.586208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.085360 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.238339 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:12.739920 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.186807 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.686030 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.306903 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.808471 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.085478 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.584968 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.238994 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.738301 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.686243 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.687966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:16.306169 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:18.306368 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.585283 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.085635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.738554 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:21.739391 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.186216 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.186318 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.186605 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.807270 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:23.307367 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.086508 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.585310 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.239650 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.739133 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.687020 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.186319 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:25.807083 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:27.807373 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.809229 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:28.586494 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.085758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.086070 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.237951 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.239234 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.186403 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.186539 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:32.305137 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:34.306664 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.586212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.085235 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.737751 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.239168 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.187669 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:37.686468 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.806650 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:39.305925 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.586428 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.084565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.739723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.237973 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.186321 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:42.187314 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:44.188149 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:41.307318 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.806323 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.085539 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.585341 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.239462 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.738184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:46.686042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.686866 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.806734 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.305446 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.305723 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.085346 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.085442 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:49.738268 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.239669 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.691518 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:53.186195 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.306654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.806020 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.085761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.586368 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.738548 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.739623 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:55.686288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:57.687383 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.807570 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.309552 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.084865 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.085071 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.085111 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.239410 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.239532 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:00.186408 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:02.186782 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.186839 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.806329 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.584749 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:07.586565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.739463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.740128 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.237766 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.187392 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.685886 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.805996 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.807179 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.086003 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.585799 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.238067 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.239177 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.686223 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.686341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:11.305779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:13.307616 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.085808 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.584477 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:14.738859 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.238767 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.187173 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.687034 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.806730 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:18.306392 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.584606 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.585553 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.738470 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.739486 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.185802 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:22.187625 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.806949 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.306121 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:25.306685 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.585692 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.085348 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.237900 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.238299 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.686574 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.687740 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.186290 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:27.805534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.806722 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.585853 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.087573 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.738699 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:30.740922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.241273 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.687338 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.186661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:32.306153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.306543 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.584981 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.585484 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.085009 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.739413 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.240386 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.687329 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:39.185388 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.308028 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.806629 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.085644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.585560 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.242599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.737723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.186288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.186859 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.306389 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.586579 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.085969 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.739244 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.237508 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:45.188774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.687222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:46.306909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:48.807077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.584667 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.584768 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.239422 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.687896 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:52.188700 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.306677 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.806006 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.585081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.585777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.085122 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.237822 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:56.238861 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.686276 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:57.186263 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.806184 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.306128 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.306364 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.588396 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.598213 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.737414 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.737727 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.739935 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:59.685823 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:01.686758 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:04.185852 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.807107 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.305740 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.085415 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.585036 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.239645 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.739347 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:06.686504 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:08.687322 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.305816 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.305938 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.586253 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.085522 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:10.239099 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.738591 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.186874 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.686181 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.306129 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.806507 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.585172 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.586137 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.738697 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.739523 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:15.686511 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:17.687193 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.306767 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.808302 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:19.085852 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.586641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.739573 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.238839 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:20.187546 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:22.687140 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.306401 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.307029 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.085548 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:26.586436 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.737681 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.737740 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.687572 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.186506 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.808456 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:28.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:30.307207 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.085660 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.087058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.739207 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.238687 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.686331 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.688381 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.187104 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.805987 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.806181 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:33.586190 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.085219 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.085516 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.238857 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.239092 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.687993 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.688870 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.808335 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.085919 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.585866 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.738192 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.738455 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.739283 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.185567 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.186680 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.307589 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.309027 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:44.586117 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.085597 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.238409 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.240204 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.685781 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.686167 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.807531 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.807973 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:50.308410 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.086271 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.086456 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.737691 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.739418 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.686475 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.687616 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:52.806510 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.806619 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:53.586673 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.085541 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.085777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.238680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.238735 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.239259 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.685972 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.686560 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.806707 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.806764 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.087035 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.088546 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.239507 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.240463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.686709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.687576 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.806909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:03.306534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.307522 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.585131 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.585178 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.738411 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.738605 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.186000 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.686048 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.806058 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.306442 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:08.585611 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.088448 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:09.238896 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.239934 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.186391 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.187940 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.680057 1465898 pod_ready.go:81] duration metric: took 4m0.000955013s waiting for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:12.680105 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:12.680132 1465898 pod_ready.go:38] duration metric: took 4m8.549185211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:12.680181 1465898 kubeadm.go:640] restartCluster took 4m32.094843295s
	W0131 03:24:12.680310 1465898 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:12.680376 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:12.307149 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:14.307483 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.586901 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.087404 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.738698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.239338 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.239499 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.806617 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:19.305298 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.585870 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.087112 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:20.737368 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:22.738599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.306715 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.807030 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.586072 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:25.586464 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.586525 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:24.731792 1465727 pod_ready.go:81] duration metric: took 4m0.00020412s waiting for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:24.731846 1465727 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:24.731869 1465727 pod_ready.go:38] duration metric: took 4m1.198813077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:24.731907 1465727 kubeadm.go:640] restartCluster took 5m3.213957096s
	W0131 03:24:24.731983 1465727 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:24.732022 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
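The pod_ready probes above poll each pod's Ready condition until it flips to True or the wait budget (4m0s here) runs out. A minimal client-go-style sketch of that check follows; it is illustrative only, not minikube's pod_ready.go, and the helper name isPodReady is made up.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True; the
    // `"Ready":"False"` lines in the log are this check returning false.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{Status: corev1.PodStatus{
    		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
    	}}
    	fmt.Println(isPodReady(pod)) // false: keep polling
    }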
	I0131 03:24:26.064348 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.383924825s)
	I0131 03:24:26.064423 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:26.076943 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:26.087474 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:26.095980 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
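The config check above is just an existence test over the four kubeconfig files kubeadm writes; when ls exits with status 2 because none of them are present, there is no stale config to clean up and the run proceeds straight to kubeadm init. A standalone sketch of an equivalent check (paths copied from the log; this is not how minikube implements it internally):

    package main

    import (
    	"fmt"
    	"os"
    )

    var kubeadmConfigs = []string{
    	"/etc/kubernetes/admin.conf",
    	"/etc/kubernetes/kubelet.conf",
    	"/etc/kubernetes/controller-manager.conf",
    	"/etc/kubernetes/scheduler.conf",
    }

    func main() {
    	stale := true
    	for _, p := range kubeadmConfigs {
    		if _, err := os.Stat(p); err != nil {
    			stale = false // a missing file means there is nothing to clean up
    			break
    		}
    	}
    	fmt.Println("stale kubeconfigs present:", stale)
    }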
	I0131 03:24:26.096026 1465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:26.286603 1465898 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:25.808330 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.809779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.308001 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.087127 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:32.589212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:31.227776 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.495715112s)
	I0131 03:24:31.227855 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:31.241889 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:31.251082 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:31.259843 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:31.259887 1465727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0131 03:24:31.469869 1465727 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:32.310672 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:34.808959 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:36.696825 1465898 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:36.696904 1465898 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:36.696998 1465898 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:36.697121 1465898 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:36.697231 1465898 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:36.697306 1465898 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:36.699102 1465898 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:36.699244 1465898 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:36.699334 1465898 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:36.699475 1465898 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:36.699584 1465898 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:36.699700 1465898 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:36.699785 1465898 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:36.699873 1465898 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:36.699958 1465898 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:36.700052 1465898 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:36.700172 1465898 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:36.700217 1465898 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:36.700283 1465898 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:36.700345 1465898 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:36.700406 1465898 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:36.700482 1465898 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:36.700549 1465898 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:36.700647 1465898 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:36.700731 1465898 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:36.702370 1465898 out.go:204]   - Booting up control plane ...
	I0131 03:24:36.702525 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:36.702658 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:36.702731 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:36.702855 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:36.702975 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:36.703038 1465898 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:36.703248 1465898 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:36.703360 1465898 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503117 seconds
	I0131 03:24:36.703517 1465898 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:36.703652 1465898 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:36.703734 1465898 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:36.703950 1465898 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-873005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:36.704029 1465898 kubeadm.go:322] [bootstrap-token] Using token: 51ueuu.c5jl6zenf29j1pbj
	I0131 03:24:36.706123 1465898 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:36.706237 1465898 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:36.706316 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:36.706475 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:36.706662 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:36.706829 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:36.706946 1465898 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:36.707093 1465898 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:36.707179 1465898 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:36.707226 1465898 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:36.707236 1465898 kubeadm.go:322] 
	I0131 03:24:36.707310 1465898 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:36.707317 1465898 kubeadm.go:322] 
	I0131 03:24:36.707411 1465898 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:36.707418 1465898 kubeadm.go:322] 
	I0131 03:24:36.707438 1465898 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:36.707518 1465898 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:36.707590 1465898 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:36.707604 1465898 kubeadm.go:322] 
	I0131 03:24:36.707693 1465898 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:36.707706 1465898 kubeadm.go:322] 
	I0131 03:24:36.707775 1465898 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:36.707785 1465898 kubeadm.go:322] 
	I0131 03:24:36.707834 1465898 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:36.707932 1465898 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:36.708029 1465898 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:36.708038 1465898 kubeadm.go:322] 
	I0131 03:24:36.708135 1465898 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:36.708236 1465898 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:36.708245 1465898 kubeadm.go:322] 
	I0131 03:24:36.708341 1465898 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708458 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:36.708490 1465898 kubeadm.go:322] 	--control-plane 
	I0131 03:24:36.708499 1465898 kubeadm.go:322] 
	I0131 03:24:36.708601 1465898 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:36.708611 1465898 kubeadm.go:322] 
	I0131 03:24:36.708703 1465898 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708836 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
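The --discovery-token-ca-cert-hash printed in the join commands above is the standard kubeadm public-key pin: SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. The sketch below recomputes it; the ca.crt path follows the certificateDir reported earlier in the init output, and it would need to run on the control-plane node to be compared against the value in the log.

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("ca.crt is not PEM-encoded")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	// Prints the same "sha256:<hex>" form kubeadm uses in the join command.
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }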
	I0131 03:24:36.708855 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:24:36.708865 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:36.710643 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:33.579236 1466459 pod_ready.go:81] duration metric: took 4m0.001168183s waiting for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:33.579284 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:33.579320 1466459 pod_ready.go:38] duration metric: took 4m12.550695133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:33.579357 1466459 kubeadm.go:640] restartCluster took 4m32.725356038s
	W0131 03:24:33.579451 1466459 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:33.579495 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:36.712379 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:36.727135 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
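The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log. For orientation only, a bridge CNI conflist generally has the shape shown below; every value is an example, not the file minikube generated here, and the sketch deliberately writes to /tmp so it cannot touch the real CNI directory.

    package main

    import "os"

    // Illustrative bridge conflist; not the contents of the file scp'd above.
    const exampleConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Written to /tmp so the sketch cannot clobber the node's CNI config.
    	if err := os.WriteFile("/tmp/1-k8s.conflist.example", []byte(exampleConflist), 0o644); err != nil {
    		panic(err)
    	}
    }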
	I0131 03:24:36.752650 1465898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:36.752760 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.752766 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=default-k8s-diff-port-873005 minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.833601 1465898 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:37.204982 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:37.706104 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.205928 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.705169 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:39.205448 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.810623 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:39.308000 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:44.456046 1465727 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0131 03:24:44.456133 1465727 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:44.456239 1465727 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:44.456349 1465727 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:44.456507 1465727 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:44.456673 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:44.456815 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:44.456888 1465727 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0131 03:24:44.456975 1465727 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:44.458558 1465727 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:44.458637 1465727 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:44.458740 1465727 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:44.458837 1465727 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:44.458937 1465727 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:44.459040 1465727 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:44.459117 1465727 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:44.459212 1465727 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:44.459291 1465727 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:44.459385 1465727 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:44.459491 1465727 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:44.459552 1465727 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:44.459628 1465727 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:44.459691 1465727 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:44.459755 1465727 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:44.459827 1465727 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:44.459899 1465727 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:44.460002 1465727 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:44.461481 1465727 out.go:204]   - Booting up control plane ...
	I0131 03:24:44.461592 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:44.461687 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:44.461801 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:44.461930 1465727 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:44.462130 1465727 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:44.462255 1465727 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503405 seconds
	I0131 03:24:44.462398 1465727 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:44.462577 1465727 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:44.462653 1465727 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:44.462817 1465727 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-711547 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0131 03:24:44.462913 1465727 kubeadm.go:322] [bootstrap-token] Using token: etlsjx.t1u4cz6ewuek932w
	I0131 03:24:44.465248 1465727 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:44.465404 1465727 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:44.465615 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:44.465805 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:44.465987 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:44.466088 1465727 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:44.466170 1465727 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:44.466239 1465727 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:44.466247 1465727 kubeadm.go:322] 
	I0131 03:24:44.466332 1465727 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:44.466354 1465727 kubeadm.go:322] 
	I0131 03:24:44.466456 1465727 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:44.466473 1465727 kubeadm.go:322] 
	I0131 03:24:44.466524 1465727 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:44.466596 1465727 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:44.466677 1465727 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:44.466696 1465727 kubeadm.go:322] 
	I0131 03:24:44.466764 1465727 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:44.466870 1465727 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:44.466971 1465727 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:44.466988 1465727 kubeadm.go:322] 
	I0131 03:24:44.467085 1465727 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0131 03:24:44.467196 1465727 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:44.467208 1465727 kubeadm.go:322] 
	I0131 03:24:44.467300 1465727 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467443 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:44.467479 1465727 kubeadm.go:322]     --control-plane 	  
	I0131 03:24:44.467488 1465727 kubeadm.go:322] 
	I0131 03:24:44.467588 1465727 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:44.467599 1465727 kubeadm.go:322] 
	I0131 03:24:44.467695 1465727 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467834 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:44.467849 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:24:44.467858 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:44.470130 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:39.705234 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.205164 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.705674 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.205045 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.705592 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.205813 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.705913 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.205465 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.705236 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.205365 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.807553 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:43.809153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:47.613982 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.034446752s)
	I0131 03:24:47.614087 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:47.627141 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:47.635785 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:47.643856 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:47.643912 1466459 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:47.866988 1466459 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:44.472066 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:44.484082 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:44.503062 1465727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:44.503138 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.503164 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=old-k8s-version-711547 minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.557194 1465727 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:44.796311 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.296601 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.796904 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.296474 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.796658 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.296647 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.796712 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.296469 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.705251 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.205696 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.705947 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.205519 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.705735 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.205285 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.706009 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.205416 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.705969 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.205783 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.306658 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:48.307077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:50.311654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:49.705636 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.205958 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.456803 1465898 kubeadm.go:1088] duration metric: took 13.704121927s to wait for elevateKubeSystemPrivileges.
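The repeated "kubectl get sa default" runs above, issued roughly every 500ms, are a wait for the default ServiceAccount, which the controller manager creates asynchronously once the control plane is up. A sketch of the same polling pattern is below; the kubectl path and kubeconfig are copied from the log, while the timeout value and program structure are arbitrary and not minikube's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // arbitrary budget for this sketch
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default ServiceAccount exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence visible in the log
    	}
    	fmt.Println("timed out waiting for the default ServiceAccount")
    }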
	I0131 03:24:50.456854 1465898 kubeadm.go:406] StartCluster complete in 5m9.932475085s
	I0131 03:24:50.456883 1465898 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.457001 1465898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:24:50.460015 1465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.460408 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:24:50.460617 1465898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:24:50.460718 1465898 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460745 1465898 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.460753 1465898 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:24:50.460798 1465898 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460831 1465898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-873005"
	I0131 03:24:50.460855 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461315 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461342 1465898 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.461361 1465898 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:50.461364 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0131 03:24:50.461369 1465898 addons.go:243] addon metrics-server should already be in state true
	I0131 03:24:50.461410 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461322 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461644 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.461778 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461812 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.460670 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:24:50.486168 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0131 03:24:50.486189 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0131 03:24:50.486323 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0131 03:24:50.486737 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487153 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487761 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.487781 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488055 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.488074 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488193 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.488460 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.488587 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.488984 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.489649 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.489717 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.490413 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.490433 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.492357 1465898 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.492372 1465898 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:24:50.492402 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.492774 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.492815 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.493142 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.493853 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.493904 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.510041 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0131 03:24:50.510628 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.511294 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.511316 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.511749 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.511982 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.512352 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0131 03:24:50.512842 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.513435 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.513454 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.513922 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.513984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.514319 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0131 03:24:50.516752 1465898 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:24:50.514718 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.514788 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.518232 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:24:50.518238 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.518248 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:24:50.518271 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.521721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.522659 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522988 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.523038 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.523050 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.523231 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.523401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.523571 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.526843 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.530691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.532381 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.534246 1465898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:24:50.535799 1465898 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.535826 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:24:50.535848 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.538666 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.538998 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.539031 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.539275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.540037 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0131 03:24:50.540217 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.540435 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.540502 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.540575 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.541462 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.541480 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.541918 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.542136 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.543588 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.546790 1465898 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.546807 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:24:50.546828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.549791 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550227 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.550254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550545 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.550712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.550827 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.550914 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.720404 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:24:50.750602 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:24:50.750631 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:24:50.770493 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.781740 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.831005 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:24:50.831037 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:24:50.957145 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:50.957195 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:24:50.995868 1465898 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-873005" context rescaled to 1 replicas
	I0131 03:24:50.995924 1465898 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:24:50.997774 1465898 out.go:177] * Verifying Kubernetes components...
	I0131 03:24:50.999400 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:51.127181 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:52.814257 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.093763301s)
	I0131 03:24:52.814295 1465898 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
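The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side IP. Reconstructed from the command itself, the stanza it inserts into the Corefile immediately before the "forward . /etc/resolv.conf" line (alongside a "log" directive ahead of "errors") is:

            hosts {
               192.168.61.1 host.minikube.internal
               fallthrough
            }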
	I0131 03:24:53.442603 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.660817091s)
	I0131 03:24:53.442735 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.315510869s)
	I0131 03:24:53.442653 1465898 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.443214595s)
	I0131 03:24:53.442784 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442807 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442746 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442847 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442800 1465898 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.442686 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.672154364s)
	I0131 03:24:53.442931 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442944 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443178 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443204 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443234 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443271 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443290 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443307 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443324 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443326 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443342 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443355 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443370 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443443 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443463 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443474 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443484 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443558 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443571 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443834 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443843 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443852 1465898 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:53.443857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.444009 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.444018 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.477413 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.477442 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.477848 1465898 node_ready.go:49] node "default-k8s-diff-port-873005" has status "Ready":"True"
	I0131 03:24:53.477878 1465898 node_ready.go:38] duration metric: took 34.988647ms waiting for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.477903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.477913 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.477891 1465898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:53.477926 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:48.797209 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.296541 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.796400 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.297357 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.797175 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.297121 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.796457 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.297151 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.797043 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.296354 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.480701 1465898 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0131 03:24:53.482138 1465898 addons.go:505] enable addons completed in 3.021541847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0131 03:24:53.518183 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:52.806757 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:54.808761 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:53.796405 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.296358 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.796988 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.296633 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.797131 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.296750 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.797103 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.296955 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.796330 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.296387 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.837963 1466459 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:58.838075 1466459 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:58.838193 1466459 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:58.838328 1466459 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:58.838507 1466459 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:58.838599 1466459 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:58.840259 1466459 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:58.840364 1466459 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:58.840490 1466459 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:58.840620 1466459 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:58.840718 1466459 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:58.840826 1466459 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:58.840905 1466459 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:58.841008 1466459 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:58.841106 1466459 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:58.841214 1466459 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:58.841304 1466459 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:58.841349 1466459 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:58.841420 1466459 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:58.841492 1466459 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:58.841553 1466459 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:58.841621 1466459 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:58.841694 1466459 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:58.841805 1466459 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:58.841887 1466459 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:58.843555 1466459 out.go:204]   - Booting up control plane ...
	I0131 03:24:58.843684 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:58.843804 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:58.843917 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:58.844072 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:58.844208 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:58.844297 1466459 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:58.844540 1466459 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:58.844657 1466459 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003861 seconds
	I0131 03:24:58.844797 1466459 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:58.844947 1466459 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:58.845022 1466459 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:58.845232 1466459 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-958254 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:58.845309 1466459 kubeadm.go:322] [bootstrap-token] Using token: ash1vg.z2czyygl2nysl4yb
	I0131 03:24:58.846832 1466459 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:58.846943 1466459 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:58.847042 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:58.847238 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:58.847445 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:58.847620 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:58.847735 1466459 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:58.847908 1466459 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:58.847969 1466459 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:58.848034 1466459 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:58.848045 1466459 kubeadm.go:322] 
	I0131 03:24:58.848142 1466459 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:58.848152 1466459 kubeadm.go:322] 
	I0131 03:24:58.848279 1466459 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:58.848308 1466459 kubeadm.go:322] 
	I0131 03:24:58.848355 1466459 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:58.848440 1466459 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:58.848515 1466459 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:58.848531 1466459 kubeadm.go:322] 
	I0131 03:24:58.848611 1466459 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:58.848622 1466459 kubeadm.go:322] 
	I0131 03:24:58.848684 1466459 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:58.848692 1466459 kubeadm.go:322] 
	I0131 03:24:58.848769 1466459 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:58.848884 1466459 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:58.848987 1466459 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:58.848994 1466459 kubeadm.go:322] 
	I0131 03:24:58.849127 1466459 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:58.849252 1466459 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:58.849265 1466459 kubeadm.go:322] 
	I0131 03:24:58.849390 1466459 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849540 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:58.849572 1466459 kubeadm.go:322] 	--control-plane 
	I0131 03:24:58.849587 1466459 kubeadm.go:322] 
	I0131 03:24:58.849698 1466459 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:58.849710 1466459 kubeadm.go:322] 
	I0131 03:24:58.849817 1466459 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849963 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
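(Editorial note: the --discovery-token-ca-cert-hash values in the kubeadm join output above are the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, which joining nodes use to pin the control plane's CA. A minimal Go sketch of how such a hash can be computed from a CA certificate file follows; the certificate path matches the certificateDir logged above but is still an assumption for illustration, and this is not minikube's or kubeadm's own code.)

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Assumed path to the cluster CA certificate; adjust for your environment.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found in CA file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm's discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }
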
	I0131 03:24:58.849981 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:24:58.849991 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:58.851748 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:54.532127 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.532155 1465898 pod_ready.go:81] duration metric: took 1.013942045s waiting for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.532164 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537895 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.537924 1465898 pod_ready.go:81] duration metric: took 5.752669ms waiting for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537937 1465898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543819 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.543850 1465898 pod_ready.go:81] duration metric: took 5.903392ms waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543863 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549279 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.549303 1465898 pod_ready.go:81] duration metric: took 5.431331ms waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549315 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647791 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.647830 1465898 pod_ready.go:81] duration metric: took 98.504261ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647846 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446878 1465898 pod_ready.go:92] pod "kube-proxy-blwwq" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.446913 1465898 pod_ready.go:81] duration metric: took 799.058225ms waiting for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446927 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848226 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.848261 1465898 pod_ready.go:81] duration metric: took 401.323547ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848275 1465898 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:57.855091 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
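(Editorial note: the pod_ready lines above poll each system-critical pod until its PodReady condition reports "True", or time out after the stated budget. A rough client-go equivalent of that check is sketched below; it is illustrative only, and the kubeconfig path, namespace, and pod name are assumptions rather than minikube's implementation.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports the Ready condition as True,
    // or the timeout elapses.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
            }
            time.Sleep(2 * time.Second)
        }
    }

    func main() {
        // Assumed kubeconfig path and pod name, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-873005", 6*time.Minute))
    }
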
	I0131 03:24:57.306243 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:59.307152 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:58.796423 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.297312 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.796598 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.296932 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.797306 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.963954 1465727 kubeadm.go:1088] duration metric: took 16.460870964s to wait for elevateKubeSystemPrivileges.
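(Editorial note: the long run of repeated "kubectl get sa default" commands above is a readiness poll for the "default" service account, and the duration metric on the line above sums that wait. The same check can be expressed directly against the API; the sketch below is an assumption-labelled illustration, not the code minikube runs.)

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path for illustration.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll roughly every 500ms, as the log above does, until the "default"
        // ServiceAccount exists in the "default" namespace.
        for {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if err == nil {
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
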
	I0131 03:25:00.964007 1465727 kubeadm.go:406] StartCluster complete in 5m39.492487154s
	I0131 03:25:00.964037 1465727 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.964135 1465727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:00.965942 1465727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.966222 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:00.966379 1465727 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:00.966464 1465727 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966478 1465727 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966474 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:25:00.966502 1465727 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966514 1465727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-711547"
	I0131 03:25:00.966522 1465727 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-711547"
	W0131 03:25:00.966531 1465727 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:00.966493 1465727 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-711547"
	W0131 03:25:00.966557 1465727 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:00.966579 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966610 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966981 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.966993 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967028 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967040 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967142 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967186 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.986034 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0131 03:25:00.986291 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0131 03:25:00.986619 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.986746 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.987299 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987320 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987467 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987479 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987834 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.988010 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:00.988075 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0131 03:25:00.988399 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.989011 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.989031 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.989620 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.990204 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.990247 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.990830 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.991921 1465727 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-711547"
	W0131 03:25:00.991946 1465727 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:00.991979 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.992390 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.992429 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.996772 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.996817 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.009234 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0131 03:25:01.009861 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.010560 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.010580 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.011185 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.011401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.013070 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0131 03:25:01.013907 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.014029 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.016324 1465727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:01.014597 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.017922 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.018046 1465727 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.018070 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:01.018094 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.018526 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.019101 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:01.019150 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.019442 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0131 03:25:01.019888 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.020393 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.020424 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.020822 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.020992 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.021500 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.022242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.022654 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.022821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.022997 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.023406 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.025473 1465727 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:01.026870 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:01.026888 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:01.026904 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.029751 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030085 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.030100 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030398 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.030647 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.030818 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.030977 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.037553 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0131 03:25:01.038049 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.038517 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.038542 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.038963 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.039329 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.041534 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.042115 1465727 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.042137 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:01.042170 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.045444 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.045973 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.045992 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.046187 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.046374 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.046619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.046751 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.284926 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:01.284951 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:01.298019 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:01.338666 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.364117 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.383424 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:01.383460 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:01.499627 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.499676 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:01.557563 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.633792 1465727 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-711547" context rescaled to 1 replicas
	I0131 03:25:01.633844 1465727 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:01.636944 1465727 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:01.638596 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:02.375769 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.07770508s)
	I0131 03:25:02.375806 1465727 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
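(Editorial note: the ConfigMap edit above injects a hosts block for host.minikube.internal into the CoreDNS Corefile by piping "kubectl get configmap coredns" through sed and back into "kubectl replace". A client-go sketch of the same idea is below; the kubeconfig path, host IP, and the string manipulation are assumptions for illustration only.)

    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path and host-facing IP for illustration.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cms := cs.CoreV1().ConfigMaps("kube-system")
        cm, err := cms.Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Insert a hosts{} stanza ahead of the forward plugin so that
        // host.minikube.internal resolves to the host-facing IP.
        hosts := "        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }\n"
        corefile := cm.Data["Corefile"]
        if !strings.Contains(corefile, "host.minikube.internal") {
            cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
            if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
    }
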
	I0131 03:25:02.849278 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.485115978s)
	I0131 03:25:02.849343 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849348 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.510642603s)
	I0131 03:25:02.849361 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849397 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849411 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849431 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291827391s)
	I0131 03:25:02.849463 1465727 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.210839065s)
	I0131 03:25:02.849466 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849478 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849490 1465727 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.851686 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851687 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851705 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851714 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851701 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851724 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851732 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851715 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851726 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851744 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851749 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851754 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851736 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851812 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851828 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.852136 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852158 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852178 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852187 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852194 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852203 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852214 1465727 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-711547"
	I0131 03:25:02.852220 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852249 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852257 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.878278 1465727 node_ready.go:49] node "old-k8s-version-711547" has status "Ready":"True"
	I0131 03:25:02.878313 1465727 node_ready.go:38] duration metric: took 28.809729ms waiting for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.878339 1465727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:02.906619 1465727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:02.910781 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.910809 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.911127 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.911137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.911148 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.913178 1465727 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0131 03:24:58.853196 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:58.880016 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:58.909967 1466459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:58.910062 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.910111 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=embed-certs-958254 minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.271954 1466459 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:59.310346 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.810934 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.310635 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.810402 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.310569 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.810714 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.310744 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.811360 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:03.311376 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.915069 1465727 addons.go:505] enable addons completed in 1.948706414s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0131 03:24:59.856962 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:02.358614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:01.807470 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:04.306044 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:03.811326 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.310435 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.811033 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.310537 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.810596 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.311182 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.811200 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.310633 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.810619 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:08.310985 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.914636 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:07.415226 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.414866 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.414894 1465727 pod_ready.go:81] duration metric: took 5.508246838s waiting for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.414904 1465727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421152 1465727 pod_ready.go:92] pod "kube-proxy-wzft2" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.421177 1465727 pod_ready.go:81] duration metric: took 6.2664ms waiting for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421191 1465727 pod_ready.go:38] duration metric: took 5.542837407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:08.421243 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:08.421313 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:08.439228 1465727 api_server.go:72] duration metric: took 6.805346982s to wait for apiserver process to appear ...
	I0131 03:25:08.439258 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:08.439321 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:25:08.445886 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:25:08.446826 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:25:08.446848 1465727 api_server.go:131] duration metric: took 7.582095ms to wait for apiserver health ...
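(Editorial note: the api_server lines above probe https://<node-ip>:8443/healthz and treat an HTTP 200 "ok" response as a healthy control plane. A minimal Go probe in the same spirit follows; the endpoint and the insecure TLS setting are assumptions kept only to make the example short, and a real client should verify the cluster CA instead.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skipping certificate verification keeps the example short; production
        // probes should trust the cluster CA rather than disable verification.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.50.63:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }
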
	I0131 03:25:08.446856 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:08.450063 1465727 system_pods.go:59] 4 kube-system pods found
	I0131 03:25:08.450085 1465727 system_pods.go:61] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.450089 1465727 system_pods.go:61] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.450095 1465727 system_pods.go:61] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.450100 1465727 system_pods.go:61] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.450112 1465727 system_pods.go:74] duration metric: took 3.250434ms to wait for pod list to return data ...
	I0131 03:25:08.450121 1465727 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:08.452528 1465727 default_sa.go:45] found service account: "default"
	I0131 03:25:08.452546 1465727 default_sa.go:55] duration metric: took 2.420247ms for default service account to be created ...
	I0131 03:25:08.452553 1465727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:08.457485 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.457514 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.457522 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.457533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.457540 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.457561 1465727 retry.go:31] will retry after 235.942588ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:04.856217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.856378 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.857457 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.800354 1465496 pod_ready.go:81] duration metric: took 4m0.001111271s waiting for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:06.800395 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:25:06.800424 1465496 pod_ready.go:38] duration metric: took 4m13.561240535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:06.800474 1465496 kubeadm.go:640] restartCluster took 4m33.63933558s
	W0131 03:25:06.800585 1465496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:25:06.800626 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:25:08.811193 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.310464 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.810641 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.310665 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.810667 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.995304 1466459 kubeadm.go:1088] duration metric: took 12.08531849s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:10.995343 1466459 kubeadm.go:406] StartCluster complete in 5m10.197561628s
	I0131 03:25:10.995368 1466459 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.995476 1466459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:10.997565 1466459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.998562 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:10.998861 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:25:10.999077 1466459 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:10.999167 1466459 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-958254"
	I0131 03:25:10.999184 1466459 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-958254"
	W0131 03:25:10.999192 1466459 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:10.999198 1466459 addons.go:69] Setting default-storageclass=true in profile "embed-certs-958254"
	I0131 03:25:10.999232 1466459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-958254"
	I0131 03:25:10.999234 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:10.999598 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999631 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999673 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999709 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999738 1466459 addons.go:69] Setting metrics-server=true in profile "embed-certs-958254"
	I0131 03:25:10.999759 1466459 addons.go:234] Setting addon metrics-server=true in "embed-certs-958254"
	W0131 03:25:10.999767 1466459 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:10.999811 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.000160 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.000206 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.020646 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0131 03:25:11.020716 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0131 03:25:11.021273 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021412 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021944 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.021972 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022107 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.022139 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022542 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022540 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022777 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.023181 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.023224 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.027202 1466459 addons.go:234] Setting addon default-storageclass=true in "embed-certs-958254"
	W0131 03:25:11.027230 1466459 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:11.027263 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.027702 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.027754 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.028003 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0131 03:25:11.029048 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.029571 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.029590 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.030209 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.030885 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.030931 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.042923 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0131 03:25:11.043492 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.044071 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.044086 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.044497 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.044800 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.046645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.049444 1466459 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:11.051401 1466459 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.051441 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:11.051477 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.054476 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055341 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.055429 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0131 03:25:11.055608 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.055626 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055808 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.056025 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.056244 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.056409 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.056920 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.056932 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.056989 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0131 03:25:11.057274 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.057428 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.057495 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.057847 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.057860 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.058662 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.059343 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.059372 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.059555 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.061701 1466459 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
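(Side note, not part of the log: the metrics-server addon in this profile is pointed at fake.domain/registry.k8s.io/echoserver:1.4, a registry that does not resolve, which is presumably why the metrics-server pods reported later in this log never leave Pending/ContainersNotReady. A hedged way to confirm that from the same cluster, assuming the addon keeps its usual k8s-app=metrics-server label; these commands are illustrative only and are not executed by the test run:

    # hypothetical manual check against the embed-certs-958254 context
    kubectl --context embed-certs-958254 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context embed-certs-958254 -n kube-system describe pod -l k8s-app=metrics-server
    # the Events section would be expected to show image pull failures for fake.domain)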
	I0131 03:25:11.063119 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:11.063138 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:11.063159 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.066101 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066408 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.066423 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066762 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.066931 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.067054 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.067162 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.080881 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0131 03:25:11.081403 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.081919 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.081931 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.082442 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.082905 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.085059 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.085518 1466459 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.085529 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:11.085545 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.087954 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.088806 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.088858 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.088868 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.089011 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.089197 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.089609 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.229346 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.255093 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:11.255124 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:11.278162 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.314832 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:11.314860 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:11.374433 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.374463 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:11.386186 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:11.431597 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.617487 1466459 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-958254" context rescaled to 1 replicas
	I0131 03:25:11.617543 1466459 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:11.620222 1466459 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:11.621888 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:08.700194 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.700226 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.700232 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.700238 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.700243 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.700267 1465727 retry.go:31] will retry after 264.487072ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
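(Side note, not part of the log: the system_pods.go retries in this block poll kube-system until the static control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) show up; only coredns, kube-proxy, metrics-server and storage-provisioner are listed so far, hence the repeated backoff. A rough one-off equivalent of what the loop is waiting for, shown purely for illustration under the assumption that the static pods carry the usual component=<name> labels once the kubelet creates them:

    kubectl -n kube-system get pods \
      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)')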
	I0131 03:25:08.970950 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.970994 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.971002 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.971013 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.971020 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.971113 1465727 retry.go:31] will retry after 296.249207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.273631 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.273666 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.273675 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.273683 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.273696 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.273722 1465727 retry.go:31] will retry after 556.880076ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.835957 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.835985 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.835991 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.835997 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.836002 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.836020 1465727 retry.go:31] will retry after 541.012405ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:10.382622 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:10.382657 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:10.382665 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:10.382674 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:10.382681 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:10.382705 1465727 retry.go:31] will retry after 644.079363ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.036738 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.036777 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.036785 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.036796 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.036803 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.036825 1465727 retry.go:31] will retry after 832.963851ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.877526 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.877569 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.877578 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.877589 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.877597 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.877635 1465727 retry.go:31] will retry after 1.088792554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:12.972355 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:12.972391 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:12.972397 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:12.972403 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:12.972408 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:12.972428 1465727 retry.go:31] will retry after 1.37018086s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:13.615542 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337333269s)
	I0131 03:25:13.615599 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.229373467s)
	I0131 03:25:13.615607 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615633 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.615632 1466459 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:13.615738 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.386359945s)
	I0131 03:25:13.615790 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615807 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616101 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616109 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616118 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616129 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616138 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616174 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616184 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616194 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616204 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616351 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616374 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.617924 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.618094 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.618057 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.783459 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.783487 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.783847 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.783872 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.966310 1466459 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.344369704s)
	I0131 03:25:13.966372 1466459 node_ready.go:35] waiting up to 6m0s for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.966498 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.534826964s)
	I0131 03:25:13.966582 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.966602 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.966990 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967011 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967023 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.967033 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.967278 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967298 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967310 1466459 addons.go:470] Verifying addon metrics-server=true in "embed-certs-958254"
	I0131 03:25:13.970159 1466459 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:10.858108 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.357207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.971527 1466459 addons.go:505] enable addons completed in 2.972461213s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:13.987533 1466459 node_ready.go:49] node "embed-certs-958254" has status "Ready":"True"
	I0131 03:25:13.987564 1466459 node_ready.go:38] duration metric: took 21.175558ms waiting for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.987577 1466459 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:13.998968 1466459 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505741 1466459 pod_ready.go:92] pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.505764 1466459 pod_ready.go:81] duration metric: took 1.506759288s waiting for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505775 1466459 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511011 1466459 pod_ready.go:92] pod "etcd-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.511037 1466459 pod_ready.go:81] duration metric: took 5.255671ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511050 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515672 1466459 pod_ready.go:92] pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.515691 1466459 pod_ready.go:81] duration metric: took 4.632936ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515699 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520372 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.520388 1466459 pod_ready.go:81] duration metric: took 4.683171ms waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520397 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570633 1466459 pod_ready.go:92] pod "kube-proxy-2n2v5" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.570660 1466459 pod_ready.go:81] duration metric: took 50.257557ms waiting for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570671 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970302 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.970325 1466459 pod_ready.go:81] duration metric: took 399.647846ms waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970336 1466459 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:17.977775 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:14.349642 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:14.349679 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:14.349688 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:14.349698 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:14.349705 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:14.349726 1465727 retry.go:31] will retry after 1.923619057s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:16.279057 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:16.279090 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:16.279098 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:16.279108 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:16.279114 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:16.279137 1465727 retry.go:31] will retry after 2.073030623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:18.359162 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:18.359189 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:18.359195 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:18.359204 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:18.359209 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:18.359228 1465727 retry.go:31] will retry after 3.260033275s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:15.855521 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:17.855614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:20.514278 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.713623849s)
	I0131 03:25:20.514394 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:20.527663 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:25:20.536562 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:25:20.545294 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:25:20.545336 1465496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:25:20.598639 1465496 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0131 03:25:20.598867 1465496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:25:20.744229 1465496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:25:20.744371 1465496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:25:20.744509 1465496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:25:20.966346 1465496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:25:20.968311 1465496 out.go:204]   - Generating certificates and keys ...
	I0131 03:25:20.968451 1465496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:25:20.968540 1465496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:25:20.968652 1465496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:25:20.968758 1465496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:25:20.968846 1465496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:25:20.969285 1465496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:25:20.969711 1465496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:25:20.970103 1465496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:25:20.970500 1465496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:25:20.970914 1465496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:25:20.971238 1465496 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:25:20.971319 1465496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:25:21.137192 1465496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:25:21.403913 1465496 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0131 03:25:21.508809 1465496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:25:21.721878 1465496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:25:22.136726 1465496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:25:22.137207 1465496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:25:22.139977 1465496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:25:19.979362 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.477779 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.624554 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:21.624586 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:21.624592 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:21.624602 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:21.624607 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:21.624626 1465727 retry.go:31] will retry after 3.519201574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:19.856226 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.856396 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:23.857487 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.141783 1465496 out.go:204]   - Booting up control plane ...
	I0131 03:25:22.141884 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:25:22.141972 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:25:22.143031 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:25:22.163448 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:25:22.163586 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:25:22.163682 1465496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:25:22.287643 1465496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:25:24.479871 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:26.977625 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:25.149248 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:25.149277 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:25.149282 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:25.149290 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:25.149295 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:25.149314 1465727 retry.go:31] will retry after 5.238557946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:25.857650 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:28.356862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.793355 1465496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506089 seconds
	I0131 03:25:30.811559 1465496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:25:30.830148 1465496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:25:31.367774 1465496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:25:31.368036 1465496 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-625812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:25:31.887121 1465496 kubeadm.go:322] [bootstrap-token] Using token: t3t0h9.3huj9bl3w24ti869
	I0131 03:25:31.888852 1465496 out.go:204]   - Configuring RBAC rules ...
	I0131 03:25:31.888974 1465496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:25:31.893841 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:25:31.902695 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:25:31.908132 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:25:31.912738 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:25:31.918089 1465496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:25:31.936690 1465496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:25:32.182433 1465496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:25:32.325953 1465496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:25:32.325981 1465496 kubeadm.go:322] 
	I0131 03:25:32.326114 1465496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:25:32.326143 1465496 kubeadm.go:322] 
	I0131 03:25:32.326244 1465496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:25:32.326272 1465496 kubeadm.go:322] 
	I0131 03:25:32.326332 1465496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:25:32.326416 1465496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:25:32.326500 1465496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:25:32.326511 1465496 kubeadm.go:322] 
	I0131 03:25:32.326588 1465496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:25:32.326598 1465496 kubeadm.go:322] 
	I0131 03:25:32.326664 1465496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:25:32.326674 1465496 kubeadm.go:322] 
	I0131 03:25:32.326743 1465496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:25:32.326853 1465496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:25:32.326947 1465496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:25:32.326958 1465496 kubeadm.go:322] 
	I0131 03:25:32.327052 1465496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:25:32.327151 1465496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:25:32.327160 1465496 kubeadm.go:322] 
	I0131 03:25:32.327264 1465496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327405 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:25:32.327437 1465496 kubeadm.go:322] 	--control-plane 
	I0131 03:25:32.327447 1465496 kubeadm.go:322] 
	I0131 03:25:32.327553 1465496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:25:32.327564 1465496 kubeadm.go:322] 
	I0131 03:25:32.327667 1465496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327800 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:25:32.328638 1465496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:25:32.328815 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:25:32.328835 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:25:32.330439 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:25:28.984930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:31.480349 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.393923 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:30.393959 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:30.393968 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:30.393979 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:30.393985 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:30.394010 1465727 retry.go:31] will retry after 6.045479872s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:30.357227 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.358411 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.332529 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:25:32.442284 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:25:32.487754 1465496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:25:32.487829 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.487926 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=no-preload-625812 minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
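(Side note, not part of the log: the two kubectl invocations above finish cluster bring-up housekeeping for this profile; the first binds the kube-system default service account to cluster-admin via the minikube-rbac ClusterRoleBinding, the second stamps the node with minikube.k8s.io/* labels (version, commit, name, updated_at, primary). A hedged sketch of how both results could be confirmed afterwards; illustrative only, outside the test run:

    # assumes kubectl is pointed at the no-preload-625812 cluster
    kubectl get clusterrolebinding minikube-rbac -o wide
    kubectl get node no-preload-625812 --show-labels | tr ',' '\n' | grep minikube.k8s.io)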
	I0131 03:25:32.706857 1465496 ops.go:34] apiserver oom_adj: -16
	I0131 03:25:32.707010 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.207717 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.707229 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.207690 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.707786 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:35.207781 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.980255 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.481025 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.444898 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:36.444932 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:36.444938 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:36.444946 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:36.444951 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:36.444993 1465727 retry.go:31] will retry after 6.676077992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:34.855915 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:37.356945 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:35.707273 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.207173 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.707797 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.207697 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.707209 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.207989 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.707538 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.207693 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.707737 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:40.207439 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.980635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:41.479377 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:43.125885 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:43.125912 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:43.125917 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:43.125924 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:43.125928 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:43.125947 1465727 retry.go:31] will retry after 7.454064585s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:39.858377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:42.356966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:40.707639 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.207708 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.707131 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.207700 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.707292 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.207810 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.707392 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.207490 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.707258 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.883783 1465496 kubeadm.go:1088] duration metric: took 12.396028951s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:44.883823 1465496 kubeadm.go:406] StartCluster complete in 5m11.777629477s
	I0131 03:25:44.883850 1465496 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.883949 1465496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:44.886319 1465496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.886620 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:44.886727 1465496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:44.886814 1465496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-625812"
	I0131 03:25:44.886837 1465496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-625812"
	W0131 03:25:44.886849 1465496 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:44.886903 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.886934 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:25:44.886991 1465496 addons.go:69] Setting default-storageclass=true in profile "no-preload-625812"
	I0131 03:25:44.887007 1465496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-625812"
	I0131 03:25:44.887134 1465496 addons.go:69] Setting metrics-server=true in profile "no-preload-625812"
	I0131 03:25:44.887155 1465496 addons.go:234] Setting addon metrics-server=true in "no-preload-625812"
	W0131 03:25:44.887164 1465496 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:44.887216 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.887313 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887349 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887407 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887439 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887611 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887655 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.908876 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0131 03:25:44.908881 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0131 03:25:44.908879 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0131 03:25:44.909406 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909433 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909512 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909925 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.909950 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910054 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910098 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910123 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910148 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910434 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910530 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910543 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910740 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.911086 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911140 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.911185 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911230 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.914635 1465496 addons.go:234] Setting addon default-storageclass=true in "no-preload-625812"
	W0131 03:25:44.914667 1465496 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:44.914698 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.915089 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.915135 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.931265 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0131 03:25:44.931296 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0131 03:25:44.931816 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.931859 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.932148 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932599 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932677 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932938 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933062 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.933655 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.933681 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.933726 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933947 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934129 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.934262 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934954 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.935001 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.936333 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.938601 1465496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:44.940239 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:44.940256 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:44.940273 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.938638 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.942306 1465496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:44.944873 1465496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:44.944894 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:44.944914 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.943649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944987 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.945023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944263 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.945795 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.946072 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.946309 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.949097 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949522 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.949544 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949710 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.949892 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.950040 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.950179 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.959691 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0131 03:25:44.960146 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.960696 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.960723 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.961045 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.961279 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.963057 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.963321 1465496 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:44.963342 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:44.963363 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.966336 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.966808 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.966845 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.967006 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.967205 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.967329 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.967472 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:45.114858 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:45.135760 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:45.209439 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:45.209466 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:45.219146 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:45.287400 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:45.287430 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:45.380888 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:45.380917 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:45.462341 1465496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-625812" context rescaled to 1 replicas
	I0131 03:25:45.462403 1465496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:45.463834 1465496 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:45.465542 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:45.515980 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:46.322228 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.20732453s)
	I0131 03:25:46.322281 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.186472094s)
	I0131 03:25:46.322327 1465496 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
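For readability: the sed pipeline that just completed rewrites the CoreDNS Corefile stored in the coredns ConfigMap so that host.minikube.internal resolves to the host-side address 192.168.72.1. Reconstructed from the sed expressions in the log (a sketch, not captured from the cluster), the patched Corefile should contain roughly:

    log
    errors
    ...
    hosts {
       192.168.72.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...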
	I0131 03:25:46.322296 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322366 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322413 1465496 node_ready.go:35] waiting up to 6m0s for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.322369 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.103177926s)
	I0131 03:25:46.322663 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322676 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322757 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.322760 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.322773 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.322783 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322791 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323137 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323156 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323167 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.323176 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323177 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323257 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323281 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323295 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323733 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323755 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.329699 1465496 node_ready.go:49] node "no-preload-625812" has status "Ready":"True"
	I0131 03:25:46.329719 1465496 node_ready.go:38] duration metric: took 7.243031ms waiting for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.329728 1465496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:46.345672 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.345703 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.345984 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.346000 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.348953 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:46.699387 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183353653s)
	I0131 03:25:46.699456 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699474 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.699910 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.699932 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.699945 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699957 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.700251 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.700272 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.700285 1465496 addons.go:470] Verifying addon metrics-server=true in "no-preload-625812"
	I0131 03:25:46.702053 1465496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:43.980700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.478141 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:44.855513 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.857198 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:49.356657 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.703328 1465496 addons.go:505] enable addons completed in 1.816619953s: enabled=[storage-provisioner default-storageclass metrics-server]
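At this point the storage-provisioner, default-storageclass and metrics-server addons are reported enabled for profile "no-preload-625812". A minimal way to double-check the same state from the host (profile and kubeconfig context names taken from the log above; exact output varies by minikube version) would be:

    minikube -p no-preload-625812 addons list
    kubectl --context no-preload-625812 -n kube-system get pods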
	I0131 03:25:46.865293 1465496 pod_ready.go:97] error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865325 1465496 pod_ready.go:81] duration metric: took 516.342792ms waiting for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:46.865336 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865343 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872316 1465496 pod_ready.go:92] pod "coredns-76f75df574-hvxjf" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.872345 1465496 pod_ready.go:81] duration metric: took 1.006996095s waiting for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872355 1465496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878192 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.878215 1465496 pod_ready.go:81] duration metric: took 5.854656ms waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878223 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883120 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.883139 1465496 pod_ready.go:81] duration metric: took 4.910099ms waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883147 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889909 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.889934 1465496 pod_ready.go:81] duration metric: took 6.780796ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889944 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926206 1465496 pod_ready.go:92] pod "kube-proxy-pkvj6" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:48.926230 1465496 pod_ready.go:81] duration metric: took 1.036280111s waiting for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926239 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325588 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:49.325613 1465496 pod_ready.go:81] duration metric: took 399.368272ms waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325623 1465496 pod_ready.go:38] duration metric: took 2.995885901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:49.325640 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:49.325693 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:49.339591 1465496 api_server.go:72] duration metric: took 3.877145066s to wait for apiserver process to appear ...
	I0131 03:25:49.339624 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:49.339652 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:25:49.345130 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:25:49.346350 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:25:49.346371 1465496 api_server.go:131] duration metric: took 6.739501ms to wait for apiserver health ...
	I0131 03:25:49.346379 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:49.529845 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:25:49.529876 1465496 system_pods.go:61] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.529881 1465496 system_pods.go:61] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.529885 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.529890 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.529894 1465496 system_pods.go:61] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.529898 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.529905 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.529909 1465496 system_pods.go:61] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.529918 1465496 system_pods.go:74] duration metric: took 183.532223ms to wait for pod list to return data ...
	I0131 03:25:49.529926 1465496 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:49.726239 1465496 default_sa.go:45] found service account: "default"
	I0131 03:25:49.726266 1465496 default_sa.go:55] duration metric: took 196.333831ms for default service account to be created ...
	I0131 03:25:49.726276 1465496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:49.933151 1465496 system_pods.go:86] 8 kube-system pods found
	I0131 03:25:49.933188 1465496 system_pods.go:89] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.933198 1465496 system_pods.go:89] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.933205 1465496 system_pods.go:89] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.933212 1465496 system_pods.go:89] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.933220 1465496 system_pods.go:89] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.933228 1465496 system_pods.go:89] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.933243 1465496 system_pods.go:89] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.933254 1465496 system_pods.go:89] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.933268 1465496 system_pods.go:126] duration metric: took 206.984671ms to wait for k8s-apps to be running ...
	I0131 03:25:49.933282 1465496 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:25:49.933345 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:49.949256 1465496 system_svc.go:56] duration metric: took 15.963316ms WaitForService to wait for kubelet.
	I0131 03:25:49.949290 1465496 kubeadm.go:581] duration metric: took 4.486852525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:25:49.949316 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:25:50.126992 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:25:50.127032 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:25:50.127044 1465496 node_conditions.go:105] duration metric: took 177.723252ms to run NodePressure ...
	I0131 03:25:50.127056 1465496 start.go:228] waiting for startup goroutines ...
	I0131 03:25:50.127063 1465496 start.go:233] waiting for cluster config update ...
	I0131 03:25:50.127072 1465496 start.go:242] writing updated cluster config ...
	I0131 03:25:50.127343 1465496 ssh_runner.go:195] Run: rm -f paused
	I0131 03:25:50.184224 1465496 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0131 03:25:50.186267 1465496 out.go:177] * Done! kubectl is now configured to use "no-preload-625812" cluster and "default" namespace by default
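The long runs of pod_ready lines that follow show metrics-server pods in the running profiles never reaching Ready. That is consistent with the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier: fake.domain is not a reachable registry, so the pod can be expected to sit in Pending/ImagePullBackOff for the whole wait (the other profiles presumably use the same fake image). A quick hand-check for the no-preload profile (a sketch, assuming the addon's usual k8s-app=metrics-server pod label) would be:

    kubectl --context no-preload-625812 -n kube-system describe pod -l k8s-app=metrics-server
    # check the Events section for ErrImagePull / ImagePullBackOff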
	I0131 03:25:48.481166 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.977129 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:52.977622 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.586089 1465727 system_pods.go:86] 6 kube-system pods found
	I0131 03:25:50.586129 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:50.586138 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Pending
	I0131 03:25:50.586144 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:50.586151 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Pending
	I0131 03:25:50.586172 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:50.586182 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:50.586211 1465727 retry.go:31] will retry after 13.55623924s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:51.856116 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:53.856661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:55.480823 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:57.978681 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:56.355895 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:58.356767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:59.981147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.479364 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:00.856081 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.977218 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:06.978863 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.148474 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:04.148505 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:04.148511 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Pending
	I0131 03:26:04.148516 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:04.148520 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:04.148524 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:04.148528 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:04.148533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:04.148537 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:04.148555 1465727 retry.go:31] will retry after 14.271857783s: missing components: etcd
	I0131 03:26:05.355042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:07.358366 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:08.981159 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:10.982761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:09.856454 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:12.357096 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:13.478470 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:15.977827 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.426593 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:18.426625 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:18.426634 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Running
	I0131 03:26:18.426641 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:18.426647 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:18.426652 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:18.426657 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:18.426667 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:18.426676 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:18.426690 1465727 system_pods.go:126] duration metric: took 1m9.974130417s to wait for k8s-apps to be running ...
	I0131 03:26:18.426704 1465727 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:26:18.426762 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:26:18.443853 1465727 system_svc.go:56] duration metric: took 17.14056ms WaitForService to wait for kubelet.
	I0131 03:26:18.443902 1465727 kubeadm.go:581] duration metric: took 1m16.810021481s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:26:18.443930 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:26:18.447269 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:26:18.447298 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:26:18.447311 1465727 node_conditions.go:105] duration metric: took 3.375419ms to run NodePressure ...
	I0131 03:26:18.447325 1465727 start.go:228] waiting for startup goroutines ...
	I0131 03:26:18.447333 1465727 start.go:233] waiting for cluster config update ...
	I0131 03:26:18.447348 1465727 start.go:242] writing updated cluster config ...
	I0131 03:26:18.447643 1465727 ssh_runner.go:195] Run: rm -f paused
	I0131 03:26:18.500327 1465727 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0131 03:26:18.502092 1465727 out.go:177] 
	W0131 03:26:18.503693 1465727 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0131 03:26:18.505132 1465727 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0131 03:26:18.506889 1465727 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-711547" cluster and "default" namespace by default
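The warning above flags a large client/server skew: the host's kubectl 1.29.1 against the 1.16.0 cluster. Following the log's own suggestion, but pinned to this profile so the bundled, version-matched kubectl is used, one could run roughly:

    minikube -p old-k8s-version-711547 kubectl -- get pods -A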
	I0131 03:26:14.856448 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:17.357112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.478401 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:20.977208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.978473 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:19.857118 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.358299 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:25.478227 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:27.978500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:24.855341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:26.855774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:28.856168 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:30.477275 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:32.478896 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:31.357512 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:33.363164 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:34.978058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:37.481411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:35.856084 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:38.358589 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:39.976914 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:41.979388 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:40.856122 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:42.856950 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:44.477345 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:46.478466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:45.356312 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:47.855178 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:48.978543 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.477641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:49.856079 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.856377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:54.358161 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:53.477989 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:55.977887 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:56.855581 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.856493 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.477589 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:00.478116 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:02.978262 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:01.354961 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:03.355994 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.478139 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.977913 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.356248 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.855596 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:10.479147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:12.977533 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:09.856222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:11.857068 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.356693 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.978967 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:17.477119 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:16.854825 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:18.855620 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:19.477877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:21.482081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:20.856333 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.355603 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.978877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:26.477700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:25.356085 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:27.356888 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:28.478497 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:30.977469 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:32.977663 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:29.854905 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:31.855752 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:33.855976 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.480505 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.977880 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.857042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.862112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:39.977961 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.478948 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:40.355787 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.358217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.977950 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.478570 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.855551 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.355853 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.977939 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:51.978267 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.855671 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:52.357889 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:53.979331 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:56.477411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:54.856642 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:57.357372 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:58.478175 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:00.977929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.978272 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:59.856232 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.356390 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:05.477602 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:07.478168 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:04.855423 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:06.859565 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.355517 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.977639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.977754 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.855199 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:13.856260 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:14.477406 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:16.478372 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:15.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:17.861124 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:18.980067 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:21.478833 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:20.356883 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:22.358007 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:23.979040 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.478463 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:24.855207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.855709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.866306 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.978973 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.477340 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.355706 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.855699 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.477521 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:35.478390 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:37.977270 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:36.358244 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:38.855704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:39.979930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.477381 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:40.856442 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.857041 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:44.477500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:46.478446 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:45.356039 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:47.855042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:48.977241 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:50.977925 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:52.978323 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:49.857897 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:51.857941 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:54.357042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.477690 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:57.477927 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.855298 1465898 pod_ready.go:81] duration metric: took 4m0.007008152s waiting for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	E0131 03:28:55.855323 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:28:55.855330 1465898 pod_ready.go:38] duration metric: took 4m2.377385486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:28:55.855346 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:28:55.855399 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:55.855533 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:55.913399 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:55.913425 1465898 cri.go:89] found id: ""
	I0131 03:28:55.913445 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:55.913515 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.918308 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:55.918379 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:55.964846 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:55.964872 1465898 cri.go:89] found id: ""
	I0131 03:28:55.964881 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:55.964942 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.969090 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:55.969158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:56.012247 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:56.012271 1465898 cri.go:89] found id: ""
	I0131 03:28:56.012279 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:56.012337 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.016457 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:56.016535 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:56.053842 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.053867 1465898 cri.go:89] found id: ""
	I0131 03:28:56.053877 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:56.053926 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.057807 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:56.057889 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:28:56.097431 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.097465 1465898 cri.go:89] found id: ""
	I0131 03:28:56.097477 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:28:56.097549 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.101354 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:28:56.101420 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:28:56.136696 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.136725 1465898 cri.go:89] found id: ""
	I0131 03:28:56.136735 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:28:56.136800 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.140584 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:28:56.140661 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:28:56.177606 1465898 cri.go:89] found id: ""
	I0131 03:28:56.177639 1465898 logs.go:284] 0 containers: []
	W0131 03:28:56.177650 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:28:56.177658 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:28:56.177779 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:28:56.215795 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.215824 1465898 cri.go:89] found id: ""
	I0131 03:28:56.215835 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:28:56.215909 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.220297 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:28:56.220324 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:28:56.319500 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:28:56.319544 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.355731 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:28:56.355767 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.410301 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:28:56.410341 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:28:56.858474 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:28:56.858531 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:28:56.903299 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:28:56.903337 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.961020 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:28:56.961070 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.998347 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:28:56.998382 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:28:57.011562 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:28:57.011594 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:28:57.152899 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:28:57.152937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:57.201041 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:28:57.201084 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:57.247253 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:28:57.247289 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.478758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:01.977644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:59.786669 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:28:59.804046 1465898 api_server.go:72] duration metric: took 4m8.808083047s to wait for apiserver process to appear ...
	I0131 03:28:59.804079 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:28:59.804131 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:59.804249 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:59.846418 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:59.846440 1465898 cri.go:89] found id: ""
	I0131 03:28:59.846448 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:59.846516 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.850526 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:59.850588 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:59.892343 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:59.892373 1465898 cri.go:89] found id: ""
	I0131 03:28:59.892382 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:59.892449 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.896483 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:59.896561 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:59.933901 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.933934 1465898 cri.go:89] found id: ""
	I0131 03:28:59.933945 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:59.934012 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.938150 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:59.938232 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:59.980328 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:59.980354 1465898 cri.go:89] found id: ""
	I0131 03:28:59.980363 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:59.980418 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.984866 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:59.984943 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:00.029663 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.029695 1465898 cri.go:89] found id: ""
	I0131 03:29:00.029705 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:00.029753 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.034759 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:00.034827 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:00.084320 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.084347 1465898 cri.go:89] found id: ""
	I0131 03:29:00.084355 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:00.084431 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.088744 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:00.088819 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:00.133028 1465898 cri.go:89] found id: ""
	I0131 03:29:00.133062 1465898 logs.go:284] 0 containers: []
	W0131 03:29:00.133072 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:00.133080 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:00.133145 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:00.175187 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.175219 1465898 cri.go:89] found id: ""
	I0131 03:29:00.175229 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:00.175306 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.179387 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:00.179420 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.233630 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:00.233676 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.271692 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:00.271735 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:00.655131 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:00.655177 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:00.757571 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:00.757628 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:00.805958 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:00.806000 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:00.842604 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:00.842650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:00.888064 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:00.888103 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.939276 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:00.939331 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:00.981965 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:00.982006 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:00.996237 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:00.996265 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:01.129715 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:01.129754 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.677131 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:29:03.684945 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:29:03.687117 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:03.687142 1465898 api_server.go:131] duration metric: took 3.883056117s to wait for apiserver health ...
	I0131 03:29:03.687171 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:03.687245 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:03.687303 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:03.727289 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:03.727314 1465898 cri.go:89] found id: ""
	I0131 03:29:03.727322 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:29:03.727375 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.731095 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:03.731158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:03.779103 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.779134 1465898 cri.go:89] found id: ""
	I0131 03:29:03.779144 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:29:03.779223 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.783387 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:03.783459 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:03.821342 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:03.821368 1465898 cri.go:89] found id: ""
	I0131 03:29:03.821376 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:29:03.821438 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.825907 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:03.825990 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:03.863826 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:03.863853 1465898 cri.go:89] found id: ""
	I0131 03:29:03.863867 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:29:03.863919 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.868093 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:03.868163 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:03.908653 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:03.908681 1465898 cri.go:89] found id: ""
	I0131 03:29:03.908690 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:03.908750 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.912998 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:03.913078 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:03.961104 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:03.961131 1465898 cri.go:89] found id: ""
	I0131 03:29:03.961139 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:03.961212 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.965913 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:03.965996 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:04.003791 1465898 cri.go:89] found id: ""
	I0131 03:29:04.003824 1465898 logs.go:284] 0 containers: []
	W0131 03:29:04.003833 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:04.003840 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:04.003907 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:04.040736 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.040773 1465898 cri.go:89] found id: ""
	I0131 03:29:04.040785 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:04.040852 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:04.045013 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:04.045042 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:04.091615 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:04.091650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:04.204602 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:04.204638 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:04.257510 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:04.257548 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:04.296585 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:04.296619 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:04.360438 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:04.360480 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.398825 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:04.398858 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:04.711357 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:04.711403 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:04.804895 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:04.804940 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:04.819394 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:04.819426 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:04.869897 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:04.869937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:04.918002 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:04.918040 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:07.471428 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:07.471466 1465898 system_pods.go:61] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.471474 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.471481 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.471488 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.471495 1465898 system_pods.go:61] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.471501 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.471516 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.471524 1465898 system_pods.go:61] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.471535 1465898 system_pods.go:74] duration metric: took 3.784356035s to wait for pod list to return data ...
	I0131 03:29:07.471552 1465898 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:07.474519 1465898 default_sa.go:45] found service account: "default"
	I0131 03:29:07.474547 1465898 default_sa.go:55] duration metric: took 2.986529ms for default service account to be created ...
	I0131 03:29:07.474559 1465898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:07.480778 1465898 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:07.480805 1465898 system_pods.go:89] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.480810 1465898 system_pods.go:89] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.480816 1465898 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.480823 1465898 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.480827 1465898 system_pods.go:89] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.480831 1465898 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.480837 1465898 system_pods.go:89] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.480842 1465898 system_pods.go:89] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.480850 1465898 system_pods.go:126] duration metric: took 6.285456ms to wait for k8s-apps to be running ...
	I0131 03:29:07.480856 1465898 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:07.480905 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:07.497612 1465898 system_svc.go:56] duration metric: took 16.74594ms WaitForService to wait for kubelet.
	I0131 03:29:07.497643 1465898 kubeadm.go:581] duration metric: took 4m16.501686281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:07.497678 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:07.501680 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:07.501732 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:07.501748 1465898 node_conditions.go:105] duration metric: took 4.063716ms to run NodePressure ...
	I0131 03:29:07.501763 1465898 start.go:228] waiting for startup goroutines ...
	I0131 03:29:07.501772 1465898 start.go:233] waiting for cluster config update ...
	I0131 03:29:07.501818 1465898 start.go:242] writing updated cluster config ...
	I0131 03:29:07.502234 1465898 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:07.559193 1465898 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:07.561350 1465898 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-873005" cluster and "default" namespace by default
	I0131 03:29:03.978465 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:06.477545 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:08.480466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:10.978639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:13.478152 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978967 1466459 pod_ready.go:81] duration metric: took 4m0.008624682s waiting for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	E0131 03:29:15.978976 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:29:15.978984 1466459 pod_ready.go:38] duration metric: took 4m1.99139457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:29:15.978999 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:29:15.979026 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:15.979074 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:16.041735 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:16.041774 1466459 cri.go:89] found id: ""
	I0131 03:29:16.041784 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:16.041845 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.046910 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:16.046982 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:16.085124 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.085156 1466459 cri.go:89] found id: ""
	I0131 03:29:16.085166 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:16.085226 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.089189 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:16.089274 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:16.129255 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.129286 1466459 cri.go:89] found id: ""
	I0131 03:29:16.129296 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:16.129352 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.133364 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:16.133451 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:16.170605 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.170634 1466459 cri.go:89] found id: ""
	I0131 03:29:16.170643 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:16.170704 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.175117 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:16.175197 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:16.210139 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:16.210169 1466459 cri.go:89] found id: ""
	I0131 03:29:16.210179 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:16.210248 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.214877 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:16.214960 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:16.257772 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.257797 1466459 cri.go:89] found id: ""
	I0131 03:29:16.257807 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:16.257878 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.262276 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:16.262341 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:16.304203 1466459 cri.go:89] found id: ""
	I0131 03:29:16.304233 1466459 logs.go:284] 0 containers: []
	W0131 03:29:16.304241 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:16.304248 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:16.304325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:16.343337 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:16.343360 1466459 cri.go:89] found id: ""
	I0131 03:29:16.343368 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:16.343423 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.347098 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:16.347129 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.389501 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:16.389544 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.426153 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:16.426196 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.476241 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:16.476281 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.533086 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:16.533131 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:16.575664 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:16.575701 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:16.675622 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:16.675669 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:16.690251 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:16.690285 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:16.828714 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:16.828748 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:17.253277 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:17.253335 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:17.304285 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:17.304323 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:17.340432 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:17.340465 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:19.889056 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:29:19.904225 1466459 api_server.go:72] duration metric: took 4m8.286630357s to wait for apiserver process to appear ...
	I0131 03:29:19.904258 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:29:19.904302 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:19.904375 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:19.939116 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:19.939147 1466459 cri.go:89] found id: ""
	I0131 03:29:19.939159 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:19.939225 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.943273 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:19.943351 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:19.979411 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:19.979436 1466459 cri.go:89] found id: ""
	I0131 03:29:19.979445 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:19.979512 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.984054 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:19.984148 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:20.022949 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.022978 1466459 cri.go:89] found id: ""
	I0131 03:29:20.022988 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:20.023046 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.027252 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:20.027325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:20.064215 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.064238 1466459 cri.go:89] found id: ""
	I0131 03:29:20.064246 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:20.064303 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.068589 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:20.068687 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:20.106750 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.106781 1466459 cri.go:89] found id: ""
	I0131 03:29:20.106792 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:20.106854 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.111267 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:20.111342 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:20.147750 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.147789 1466459 cri.go:89] found id: ""
	I0131 03:29:20.147801 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:20.147873 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.152882 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:20.152950 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:20.191082 1466459 cri.go:89] found id: ""
	I0131 03:29:20.191121 1466459 logs.go:284] 0 containers: []
	W0131 03:29:20.191133 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:20.191143 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:20.191226 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:20.226346 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.226373 1466459 cri.go:89] found id: ""
	I0131 03:29:20.226382 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:20.226436 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.230561 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:20.230607 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:20.596919 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:20.596968 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:20.691142 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:20.691184 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:20.750659 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:20.750692 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.816839 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:20.816882 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.852691 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:20.852730 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.909788 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:20.909828 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.950311 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:20.950360 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.985515 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:20.985554 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:21.030306 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:21.030350 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:21.043130 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:21.043172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:21.160716 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:21.160763 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.706550 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:29:23.711528 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:29:23.713998 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:23.714027 1466459 api_server.go:131] duration metric: took 3.809760557s to wait for apiserver health ...
	I0131 03:29:23.714039 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:23.714070 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:23.714142 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:23.754990 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:23.755017 1466459 cri.go:89] found id: ""
	I0131 03:29:23.755028 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:23.755091 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.759151 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:23.759224 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:23.798410 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.798448 1466459 cri.go:89] found id: ""
	I0131 03:29:23.798459 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:23.798541 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.802512 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:23.802588 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:23.840962 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:23.840991 1466459 cri.go:89] found id: ""
	I0131 03:29:23.841001 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:23.841073 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.844943 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:23.845021 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:23.882314 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:23.882355 1466459 cri.go:89] found id: ""
	I0131 03:29:23.882368 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:23.882438 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.886227 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:23.886292 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:23.925001 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:23.925031 1466459 cri.go:89] found id: ""
	I0131 03:29:23.925042 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:23.925100 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.929531 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:23.929601 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:23.969068 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:23.969098 1466459 cri.go:89] found id: ""
	I0131 03:29:23.969108 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:23.969167 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.973154 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:23.973216 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:24.010928 1466459 cri.go:89] found id: ""
	I0131 03:29:24.010956 1466459 logs.go:284] 0 containers: []
	W0131 03:29:24.010963 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:24.010970 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:24.011026 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:24.052588 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.052614 1466459 cri.go:89] found id: ""
	I0131 03:29:24.052622 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:24.052678 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:24.056735 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:24.056762 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:24.105290 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:24.105324 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:24.152634 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:24.152678 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:24.198981 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:24.199021 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:24.247140 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:24.247172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:24.287472 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:24.287502 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:24.344060 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:24.344101 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.384811 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:24.384846 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:24.707577 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:24.707628 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:24.756450 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:24.756490 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:24.844886 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:24.844935 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:24.859102 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:24.859132 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:27.482952 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:27.482992 1466459 system_pods.go:61] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.483000 1466459 system_pods.go:61] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.483007 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.483027 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.483038 1466459 system_pods.go:61] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.483049 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.483056 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.483066 1466459 system_pods.go:61] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.483076 1466459 system_pods.go:74] duration metric: took 3.76903179s to wait for pod list to return data ...
	I0131 03:29:27.483087 1466459 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:27.486092 1466459 default_sa.go:45] found service account: "default"
	I0131 03:29:27.486121 1466459 default_sa.go:55] duration metric: took 3.025473ms for default service account to be created ...
	I0131 03:29:27.486131 1466459 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:27.491964 1466459 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:27.491989 1466459 system_pods.go:89] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.491997 1466459 system_pods.go:89] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.492004 1466459 system_pods.go:89] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.492010 1466459 system_pods.go:89] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.492015 1466459 system_pods.go:89] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.492022 1466459 system_pods.go:89] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.492032 1466459 system_pods.go:89] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.492044 1466459 system_pods.go:89] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.492059 1466459 system_pods.go:126] duration metric: took 5.920402ms to wait for k8s-apps to be running ...
	I0131 03:29:27.492076 1466459 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:27.492131 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:27.507857 1466459 system_svc.go:56] duration metric: took 15.770556ms WaitForService to wait for kubelet.
	I0131 03:29:27.507891 1466459 kubeadm.go:581] duration metric: took 4m15.890307101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:27.507918 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:27.510942 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:27.510968 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:27.510980 1466459 node_conditions.go:105] duration metric: took 3.056564ms to run NodePressure ...
	I0131 03:29:27.510992 1466459 start.go:228] waiting for startup goroutines ...
	I0131 03:29:27.510998 1466459 start.go:233] waiting for cluster config update ...
	I0131 03:29:27.511008 1466459 start.go:242] writing updated cluster config ...
	I0131 03:29:27.511334 1466459 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:27.564506 1466459 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:27.566730 1466459 out.go:177] * Done! kubectl is now configured to use "embed-certs-958254" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:19:04 UTC, ends at Wed 2024-01-31 03:39:12 UTC. --
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.259238458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=222a29da-ead7-4ac6-b64d-44d9d732bff4 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.260848587Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7364e248-0fed-4ae6-8725-c0947eb574d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.261420320Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672352261398291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7364e248-0fed-4ae6-8725-c0947eb574d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.262657863Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:fake.domain/registry.k8s.io/echoserver:1.4,Annotations:map[string]string{},},Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=0af7dd70-6c48-462c-939c-91d0c94092c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.262719794Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:30" id=0af7dd70-6c48-462c-939c-91d0c94092c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.262881840Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\"" file="storage/storage_transport.go:185"
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.262951952Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\" does not resolve to an image ID" file="storage/storage_reference.go:147"
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.263019829Z" level=debug msg="Can't find fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:47" id=0af7dd70-6c48-462c-939c-91d0c94092c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.263041666Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:90" id=0af7dd70-6c48-462c-939c-91d0c94092c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.263071431Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=0af7dd70-6c48-462c-939c-91d0c94092c7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.264524636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a2b42842-8c7d-4f94-823b-c32c097946ae name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.264617598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a2b42842-8c7d-4f94-823b-c32c097946ae name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.264774904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4536b256460d01a484389dc7907e5c6dc509dd6e4a7ae0c7baf77d5a1571a858,PodSandboxId:1eb30c88799a331fc2a1310e2f80ce52a271cbf6d0e98d9edf30182e61a8e477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671505052101079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b345c5ea-80fe-48c2-9a7a-f10b0cd4d482,},Annotations:map[string]string{io.kubernetes.container.hash: 9af98072,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89474b25c515ccc7c132dfe986483094ca1276f8b8157be982ea69240ca4c5f1,PodSandboxId:8bf2df2c5adaaf24eff7217103ff9cee094e585d4a6933acfd92c599a0fbdf18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706671502854247676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzft2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a2844e-22c6-4184-9f2b-5030a29dc0ec,},Annotations:map[string]string{io.kubernetes.container.hash: 66373d6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e69cae579f0e5e03faf1d1ebf42f61c5a3fe9faff08613cd96d67da59dfd0d,PodSandboxId:04e24dd17a190c4cc5ec5e34e76ca02525b48beab0d3b578e19a6bcdde29251b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706671501781771620,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qq7jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbb4201f-8bce-408f-a16d-57d8f91c8304,},Annotations:map[string]string{io.kubernetes.container.hash: 1ec11e45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5512b85314b2266a6bda771d3f9a1d08ae1ee23c06aec0a748cf16f784af4c,PodSandboxId:ce56339770d6140898815c00f590a39a236d2a3b9cfd40ca6dd92529d2fe0f9b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706671474994147157,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa2f412fd968a1485f6450db34ac4a4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f827b34a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e481611f29cb77afd2c1e2d755cf36bbb7df5edf8eca0217331120703733b0,PodSandboxId:ee476375706738ecc48a4928ca38f7f1dc2b9d1c44172dfcb7d088e6de610516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706671473779058639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65cad251629fc86a869cdfa15ac4e874beb1793b22a23f35ee8602d125f45f8,PodSandboxId:b122af61c8f3f8794a85acc21b60b6066e7aad749940757619944f644ece37b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706671473660335300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91db4b95f9102ce4d04f4534f69d7f825c5a497c849389fc3c9b52bae5910889,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706671472928516729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670c449d91b909470ac5f604bae93cf22b5d857b098cb5de7e5a291618367429,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1706671174463880942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a2b42842-8c7d-4f94-823b-c32c097946ae name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.265620168Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b282a566-1043-482d-80ba-d6acb9472b88 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.266385295Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1eb30c88799a331fc2a1310e2f80ce52a271cbf6d0e98d9edf30182e61a8e477,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b345c5ea-80fe-48c2-9a7a-f10b0cd4d482,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706671504721970672,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b345c5ea-80fe-48c2-9a7a-f10b0cd4d482,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"
kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-31T03:25:02.878862745Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c820233c7de60b46fe382cd2b2487f42015ce53e52d24cb18b0de8f7fbe8a852,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-sgw75,Uid:e66d5152-4065-4916-8bfa-1b78adc5c7a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706671503891386735,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-sgw75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e66d5152-4065-4916-8bfa-1b78adc
5c7a2,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-31T03:25:03.548880613Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:04e24dd17a190c4cc5ec5e34e76ca02525b48beab0d3b578e19a6bcdde29251b,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-qq7jp,Uid:cbb4201f-8bce-408f-a16d-57d8f91c8304,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706671501085295009,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-qq7jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbb4201f-8bce-408f-a16d-57d8f91c8304,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-31T03:25:00.747822573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8bf2df2c5adaaf24eff7217103ff9cee094e585d4a6933acfd92c599a0fbdf18,Metadata:&PodSandboxMetadata{Name:kube-proxy-wzft2,Uid:31a2844e-22c6-4184-9f2
b-5030a29dc0ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706671500664295893,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wzft2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a2844e-22c6-4184-9f2b-5030a29dc0ec,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-31T03:25:00.313449063Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce56339770d6140898815c00f590a39a236d2a3b9cfd40ca6dd92529d2fe0f9b,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-711547,Uid:6fa2f412fd968a1485f6450db34ac4a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706671472977837933,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa2f412fd968a1485f6450db34ac4a4,tier: contr
ol-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6fa2f412fd968a1485f6450db34ac4a4,kubernetes.io/config.seen: 2024-01-31T03:24:32.463255236Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee476375706738ecc48a4928ca38f7f1dc2b9d1c44172dfcb7d088e6de610516,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-711547,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706671472971268961,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2024-01-31T03:24:32.461164995Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b122af61c8f3f8794a85acc21b60b6066e7aad749940757619944f6
44ece37b8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-711547,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706671472958955295,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2024-01-31T03:24:32.45928925Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-711547,Uid:eee4d191ac6716d637f12544f69e50a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706671174008790389,Labels:map[string]string{component: kube-apiserver,io.
kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: eee4d191ac6716d637f12544f69e50a3,kubernetes.io/config.seen: 2024-01-31T03:19:33.095967368Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=b282a566-1043-482d-80ba-d6acb9472b88 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.268811073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c6c3d2f1-1815-46c5-8037-45111e217f55 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.268859229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c6c3d2f1-1815-46c5-8037-45111e217f55 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.269002879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4536b256460d01a484389dc7907e5c6dc509dd6e4a7ae0c7baf77d5a1571a858,PodSandboxId:1eb30c88799a331fc2a1310e2f80ce52a271cbf6d0e98d9edf30182e61a8e477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671505052101079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b345c5ea-80fe-48c2-9a7a-f10b0cd4d482,},Annotations:map[string]string{io.kubernetes.container.hash: 9af98072,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89474b25c515ccc7c132dfe986483094ca1276f8b8157be982ea69240ca4c5f1,PodSandboxId:8bf2df2c5adaaf24eff7217103ff9cee094e585d4a6933acfd92c599a0fbdf18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706671502854247676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzft2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a2844e-22c6-4184-9f2b-5030a29dc0ec,},Annotations:map[string]string{io.kubernetes.container.hash: 66373d6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e69cae579f0e5e03faf1d1ebf42f61c5a3fe9faff08613cd96d67da59dfd0d,PodSandboxId:04e24dd17a190c4cc5ec5e34e76ca02525b48beab0d3b578e19a6bcdde29251b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706671501781771620,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qq7jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbb4201f-8bce-408f-a16d-57d8f91c8304,},Annotations:map[string]string{io.kubernetes.container.hash: 1ec11e45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5512b85314b2266a6bda771d3f9a1d08ae1ee23c06aec0a748cf16f784af4c,PodSandboxId:ce56339770d6140898815c00f590a39a236d2a3b9cfd40ca6dd92529d2fe0f9b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706671474994147157,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa2f412fd968a1485f6450db34ac4a4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f827b34a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e481611f29cb77afd2c1e2d755cf36bbb7df5edf8eca0217331120703733b0,PodSandboxId:ee476375706738ecc48a4928ca38f7f1dc2b9d1c44172dfcb7d088e6de610516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706671473779058639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65cad251629fc86a869cdfa15ac4e874beb1793b22a23f35ee8602d125f45f8,PodSandboxId:b122af61c8f3f8794a85acc21b60b6066e7aad749940757619944f644ece37b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706671473660335300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91db4b95f9102ce4d04f4534f69d7f825c5a497c849389fc3c9b52bae5910889,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706671472928516729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c6c3d2f1-1815-46c5-8037-45111e217f55 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.298303568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4bc8037b-fee5-4786-b6df-c44275f1cb93 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.298416342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4bc8037b-fee5-4786-b6df-c44275f1cb93 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.299834724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1ef26a9b-eed7-40d7-8887-c070b5e9eec2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.300389114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672352300368374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1ef26a9b-eed7-40d7-8887-c070b5e9eec2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.300994622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4457e944-cea1-4ed2-80dc-a3c9c7710ba2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.301070706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4457e944-cea1-4ed2-80dc-a3c9c7710ba2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:39:12 old-k8s-version-711547 crio[705]: time="2024-01-31 03:39:12.301243679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4536b256460d01a484389dc7907e5c6dc509dd6e4a7ae0c7baf77d5a1571a858,PodSandboxId:1eb30c88799a331fc2a1310e2f80ce52a271cbf6d0e98d9edf30182e61a8e477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671505052101079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b345c5ea-80fe-48c2-9a7a-f10b0cd4d482,},Annotations:map[string]string{io.kubernetes.container.hash: 9af98072,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89474b25c515ccc7c132dfe986483094ca1276f8b8157be982ea69240ca4c5f1,PodSandboxId:8bf2df2c5adaaf24eff7217103ff9cee094e585d4a6933acfd92c599a0fbdf18,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706671502854247676,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wzft2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a2844e-22c6-4184-9f2b-5030a29dc0ec,},Annotations:map[string]string{io.kubernetes.container.hash: 66373d6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e69cae579f0e5e03faf1d1ebf42f61c5a3fe9faff08613cd96d67da59dfd0d,PodSandboxId:04e24dd17a190c4cc5ec5e34e76ca02525b48beab0d3b578e19a6bcdde29251b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706671501781771620,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qq7jp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbb4201f-8bce-408f-a16d-57d8f91c8304,},Annotations:map[string]string{io.kubernetes.container.hash: 1ec11e45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5512b85314b2266a6bda771d3f9a1d08ae1ee23c06aec0a748cf16f784af4c,PodSandboxId:ce56339770d6140898815c00f590a39a236d2a3b9cfd40ca6dd92529d2fe0f9b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706671474994147157,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fa2f412fd968a1485f6450db34ac4a4,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f827b34a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62e481611f29cb77afd2c1e2d755cf36bbb7df5edf8eca0217331120703733b0,PodSandboxId:ee476375706738ecc48a4928ca38f7f1dc2b9d1c44172dfcb7d088e6de610516,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706671473779058639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65cad251629fc86a869cdfa15ac4e874beb1793b22a23f35ee8602d125f45f8,PodSandboxId:b122af61c8f3f8794a85acc21b60b6066e7aad749940757619944f644ece37b8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706671473660335300,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91db4b95f9102ce4d04f4534f69d7f825c5a497c849389fc3c9b52bae5910889,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706671472928516729,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670c449d91b909470ac5f604bae93cf22b5d857b098cb5de7e5a291618367429,PodSandboxId:f6e11203a54dd454b85cfd381906f7c7ac0fa1035adc6dd351cd788f81b5d3c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1706671174463880942,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-711547,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eee4d191ac6716d637f12544f69e50a3,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 78a0acb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4457e944-cea1-4ed2-80dc-a3c9c7710ba2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4536b256460d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   1eb30c88799a3       storage-provisioner
	89474b25c515c       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   14 minutes ago      Running             kube-proxy                0                   8bf2df2c5adaa       kube-proxy-wzft2
	d3e69cae579f0       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   14 minutes ago      Running             coredns                   0                   04e24dd17a190       coredns-5644d7b6d9-qq7jp
	df5512b85314b       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   14 minutes ago      Running             etcd                      0                   ce56339770d61       etcd-old-k8s-version-711547
	62e481611f29c       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   14 minutes ago      Running             kube-scheduler            0                   ee47637570673       kube-scheduler-old-k8s-version-711547
	f65cad251629f       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   14 minutes ago      Running             kube-controller-manager   0                   b122af61c8f3f       kube-controller-manager-old-k8s-version-711547
	91db4b95f9102       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   14 minutes ago      Running             kube-apiserver            1                   f6e11203a54dd       kube-apiserver-old-k8s-version-711547
	670c449d91b90       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   19 minutes ago      Exited              kube-apiserver            0                   f6e11203a54dd       kube-apiserver-old-k8s-version-711547
	
	
	==> coredns [d3e69cae579f0e5e03faf1d1ebf42f61c5a3fe9faff08613cd96d67da59dfd0d] <==
	.:53
	2024-01-31T03:25:02.447Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2024-01-31T03:25:02.447Z [INFO] CoreDNS-1.6.2
	2024-01-31T03:25:02.447Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2024-01-31T03:25:38.913Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               old-k8s-version-711547
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-711547
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=old-k8s-version-711547
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:38:39 +0000   Wed, 31 Jan 2024 03:24:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:38:39 +0000   Wed, 31 Jan 2024 03:24:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:38:39 +0000   Wed, 31 Jan 2024 03:24:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:38:39 +0000   Wed, 31 Jan 2024 03:24:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.63
	  Hostname:    old-k8s-version-711547
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 5ed24fdc301b463c9e01bc891888c917
	 System UUID:                5ed24fdc-301b-463c-9e01-bc891888c917
	 Boot ID:                    6a4b3c64-df84-40b8-a1f8-6a83b2dafacf
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-qq7jp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                etcd-old-k8s-version-711547                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-apiserver-old-k8s-version-711547             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-controller-manager-old-k8s-version-711547    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-proxy-wzft2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                kube-scheduler-old-k8s-version-711547             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                metrics-server-74d5856cc6-sgw75                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet, old-k8s-version-711547     Node old-k8s-version-711547 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x7 over 14m)  kubelet, old-k8s-version-711547     Node old-k8s-version-711547 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x8 over 14m)  kubelet, old-k8s-version-711547     Node old-k8s-version-711547 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kube-proxy, old-k8s-version-711547  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan31 03:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063911] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan31 03:19] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.825967] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.137447] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.369020] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.195460] systemd-fstab-generator[632]: Ignoring "noauto" for root device
	[  +0.106566] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.168399] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.114746] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.213085] systemd-fstab-generator[691]: Ignoring "noauto" for root device
	[ +17.764223] systemd-fstab-generator[1010]: Ignoring "noauto" for root device
	[  +0.422548] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +16.991333] kauditd_printk_skb: 3 callbacks suppressed
	[Jan31 03:20] kauditd_printk_skb: 2 callbacks suppressed
	[Jan31 03:24] systemd-fstab-generator[3104]: Ignoring "noauto" for root device
	[  +0.588175] kauditd_printk_skb: 6 callbacks suppressed
	[Jan31 03:25] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [df5512b85314b2266a6bda771d3f9a1d08ae1ee23c06aec0a748cf16f784af4c] <==
	2024-01-31 03:24:35.119489 I | raft: 7a1fa572d5c18c56 became follower at term 0
	2024-01-31 03:24:35.119498 I | raft: newRaft 7a1fa572d5c18c56 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-31 03:24:35.119501 I | raft: 7a1fa572d5c18c56 became follower at term 1
	2024-01-31 03:24:35.127939 W | auth: simple token is not cryptographically signed
	2024-01-31 03:24:35.132411 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-31 03:24:35.133878 I | etcdserver: 7a1fa572d5c18c56 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-31 03:24:35.134507 I | etcdserver/membership: added member 7a1fa572d5c18c56 [https://192.168.50.63:2380] to cluster 77c04c1230f4f4e2
	2024-01-31 03:24:35.135323 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-31 03:24:35.135484 I | embed: listening for metrics on http://192.168.50.63:2381
	2024-01-31 03:24:35.135682 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-31 03:24:35.820151 I | raft: 7a1fa572d5c18c56 is starting a new election at term 1
	2024-01-31 03:24:35.820308 I | raft: 7a1fa572d5c18c56 became candidate at term 2
	2024-01-31 03:24:35.820348 I | raft: 7a1fa572d5c18c56 received MsgVoteResp from 7a1fa572d5c18c56 at term 2
	2024-01-31 03:24:35.820380 I | raft: 7a1fa572d5c18c56 became leader at term 2
	2024-01-31 03:24:35.820401 I | raft: raft.node: 7a1fa572d5c18c56 elected leader 7a1fa572d5c18c56 at term 2
	2024-01-31 03:24:35.820915 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-31 03:24:35.821258 I | etcdserver: published {Name:old-k8s-version-711547 ClientURLs:[https://192.168.50.63:2379]} to cluster 77c04c1230f4f4e2
	2024-01-31 03:24:35.821481 I | embed: ready to serve client requests
	2024-01-31 03:24:35.822337 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-31 03:24:35.822427 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-31 03:24:35.822463 I | embed: ready to serve client requests
	2024-01-31 03:24:35.824047 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-31 03:24:35.824822 I | embed: serving client requests on 192.168.50.63:2379
	2024-01-31 03:34:35.847010 I | mvcc: store.index: compact 663
	2024-01-31 03:34:35.849181 I | mvcc: finished scheduled compaction at 663 (took 1.600008ms)
	
	
	==> kernel <==
	 03:39:12 up 20 min,  0 users,  load average: 0.00, 0.07, 0.09
	Linux old-k8s-version-711547 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [670c449d91b909470ac5f604bae93cf22b5d857b098cb5de7e5a291618367429] <==
	W0131 03:24:29.294835       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294413       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294858       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294897       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294923       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294647       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294939       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294979       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294980       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295017       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295023       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295063       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295066       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295106       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295110       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295147       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295151       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295268       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295308       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295347       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295395       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.294897       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295184       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295208       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0131 03:24:29.295504       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [91db4b95f9102ce4d04f4534f69d7f825c5a497c849389fc3c9b52bae5910889] <==
	I0131 03:30:40.382152       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:30:40.382290       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:30:40.382333       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:30:40.382341       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:32:40.382760       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:32:40.382881       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:32:40.382973       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:32:40.382984       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:34:40.383895       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:34:40.384013       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:34:40.384127       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:34:40.384154       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:35:40.384463       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:35:40.384687       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:35:40.384760       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:35:40.384772       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:37:40.385062       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0131 03:37:40.385179       1 handler_proxy.go:99] no RequestInfo found in the context
	E0131 03:37:40.385257       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:37:40.385268       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f65cad251629fc86a869cdfa15ac4e874beb1793b22a23f35ee8602d125f45f8] <==
	W0131 03:33:00.795023       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:33:04.834062       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:33:32.797370       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:33:35.086813       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:34:04.799719       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:34:05.340037       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0131 03:34:35.591945       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:34:36.801684       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:35:05.843691       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:35:08.803426       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:35:36.096303       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:35:40.805164       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:36:06.348804       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:36:12.808376       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:36:36.600770       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:36:44.810684       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:37:06.852872       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:37:16.812886       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:37:37.105293       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:37:48.814889       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:38:07.357470       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:38:20.816984       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:38:37.609454       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0131 03:38:52.818904       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0131 03:39:07.861407       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [89474b25c515ccc7c132dfe986483094ca1276f8b8157be982ea69240ca4c5f1] <==
	W0131 03:25:03.134003       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0131 03:25:03.142922       1 node.go:135] Successfully retrieved node IP: 192.168.50.63
	I0131 03:25:03.142999       1 server_others.go:149] Using iptables Proxier.
	I0131 03:25:03.143285       1 server.go:529] Version: v1.16.0
	I0131 03:25:03.154528       1 config.go:313] Starting service config controller
	I0131 03:25:03.154677       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0131 03:25:03.154720       1 config.go:131] Starting endpoints config controller
	I0131 03:25:03.154740       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0131 03:25:03.259773       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0131 03:25:03.260704       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [62e481611f29cb77afd2c1e2d755cf36bbb7df5edf8eca0217331120703733b0] <==
	W0131 03:24:39.348472       1 authentication.go:79] Authentication is disabled
	I0131 03:24:39.348483       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0131 03:24:39.348973       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0131 03:24:39.404513       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:24:39.412324       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:24:39.417888       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:24:39.417983       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 03:24:39.418038       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:39.418091       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 03:24:39.418282       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:24:39.418333       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 03:24:39.418382       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:39.418524       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:24:39.419673       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0131 03:24:40.411438       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:24:40.413633       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:24:40.419836       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:24:40.421045       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 03:24:40.422455       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:40.423699       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0131 03:24:40.426297       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:24:40.426982       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 03:24:40.427924       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:40.428692       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:24:40.429777       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:19:04 UTC, ends at Wed 2024-01-31 03:39:12 UTC. --
	Jan 31 03:34:33 old-k8s-version-711547 kubelet[3110]: E0131 03:34:33.263011    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:34:45 old-k8s-version-711547 kubelet[3110]: E0131 03:34:45.262949    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:34:59 old-k8s-version-711547 kubelet[3110]: E0131 03:34:59.262916    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:35:13 old-k8s-version-711547 kubelet[3110]: E0131 03:35:13.262828    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:35:27 old-k8s-version-711547 kubelet[3110]: E0131 03:35:27.262918    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:35:41 old-k8s-version-711547 kubelet[3110]: E0131 03:35:41.262796    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:35:54 old-k8s-version-711547 kubelet[3110]: E0131 03:35:54.274029    3110 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:35:54 old-k8s-version-711547 kubelet[3110]: E0131 03:35:54.274163    3110 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:35:54 old-k8s-version-711547 kubelet[3110]: E0131 03:35:54.274212    3110 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:35:54 old-k8s-version-711547 kubelet[3110]: E0131 03:35:54.274252    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 31 03:36:09 old-k8s-version-711547 kubelet[3110]: E0131 03:36:09.263600    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:36:20 old-k8s-version-711547 kubelet[3110]: E0131 03:36:20.263772    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:36:33 old-k8s-version-711547 kubelet[3110]: E0131 03:36:33.263008    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:36:46 old-k8s-version-711547 kubelet[3110]: E0131 03:36:46.263428    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:37:01 old-k8s-version-711547 kubelet[3110]: E0131 03:37:01.263334    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:37:15 old-k8s-version-711547 kubelet[3110]: E0131 03:37:15.263094    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:37:28 old-k8s-version-711547 kubelet[3110]: E0131 03:37:28.264493    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:37:40 old-k8s-version-711547 kubelet[3110]: E0131 03:37:40.262757    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:37:53 old-k8s-version-711547 kubelet[3110]: E0131 03:37:53.262781    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:38:07 old-k8s-version-711547 kubelet[3110]: E0131 03:38:07.263059    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:38:20 old-k8s-version-711547 kubelet[3110]: E0131 03:38:20.262735    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:38:33 old-k8s-version-711547 kubelet[3110]: E0131 03:38:33.263411    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:38:46 old-k8s-version-711547 kubelet[3110]: E0131 03:38:46.263104    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:39:01 old-k8s-version-711547 kubelet[3110]: E0131 03:39:01.262805    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 31 03:39:12 old-k8s-version-711547 kubelet[3110]: E0131 03:39:12.263268    3110 pod_workers.go:191] Error syncing pod e66d5152-4065-4916-8bfa-1b78adc5c7a2 ("metrics-server-74d5856cc6-sgw75_kube-system(e66d5152-4065-4916-8bfa-1b78adc5c7a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [4536b256460d01a484389dc7907e5c6dc509dd6e4a7ae0c7baf77d5a1571a858] <==
	I0131 03:25:05.150979       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 03:25:05.161301       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 03:25:05.161497       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 03:25:05.171192       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 03:25:05.171973       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"703f8f47-4881-4eaf-baa8-ff28fdfbd411", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-711547_2f9ab52a-6a01-4c60-9d14-112b31dd2894 became leader
	I0131 03:25:05.172243       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-711547_2f9ab52a-6a01-4c60-9d14-112b31dd2894!
	I0131 03:25:05.272610       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-711547_2f9ab52a-6a01-4c60-9d14-112b31dd2894!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-711547 -n old-k8s-version-711547
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-711547 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-sgw75
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-711547 describe pod metrics-server-74d5856cc6-sgw75
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-711547 describe pod metrics-server-74d5856cc6-sgw75: exit status 1 (72.297853ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-sgw75" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-711547 describe pod metrics-server-74d5856cc6-sgw75: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (231.88s)
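
For reference, the post-mortem sequence above (helpers_test.go:261 and :277) amounts to listing every pod whose phase is not Running and then describing each one. Below is a minimal sketch of that diagnostic step in Go, shelling out to kubectl the same way the report does; the context name, field selector, and jsonpath come from the log above, while the function names and error handling are purely illustrative and not the test suite's actual helpers.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listNonRunningPods mirrors helpers_test.go:261 above: ask kubectl for the
	// names of all pods, in every namespace, whose phase is not Running.
	func listNonRunningPods(kubeContext string) ([]string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A",
			"--field-selector=status.phase!=Running",
		).Output()
		if err != nil {
			return nil, fmt.Errorf("listing non-running pods: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// Context name taken from the log above; illustrative only.
		const kubeContext = "old-k8s-version-711547"
		pods, err := listNonRunningPods(kubeContext)
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, pod := range pods {
			// Describe each non-running pod, as helpers_test.go:277 does. A pod
			// that no longer exists (or lives in a namespace other than the
			// default one) yields the "NotFound" exit status 1 seen above.
			out, _ := exec.Command("kubectl",
				"--context", kubeContext,
				"describe", "pod", pod,
			).CombinedOutput()
			fmt.Printf("%s\n%s\n", pod, out)
		}
	}
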

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (161.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-31 03:40:50.163872671 +0000 UTC m=+5808.805220499
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-873005 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-873005 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.316µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-873005 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
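
The assertion at start_stop_delete_test.go:297 boils down to reading the dashboard-metrics-scraper deployment and checking that one of its container images contains "registry.k8s.io/echoserver:1.4" (the override passed to `addons enable dashboard --images=MetricsScraper=...` earlier in this run). Below is a minimal sketch of that image check, again shelling out to kubectl; the deployment name, namespace, context, and expected image string come from the log above, while the helper name and the jsonpath-based approach are assumptions, not the test's actual implementation. In the failure above the check never got this far: the surrounding context had already hit its 9m0s deadline, so even `kubectl describe deploy` returned "context deadline exceeded".

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// addonUsesImage reports whether the given deployment runs a container whose
	// image contains the expected substring, e.g. "registry.k8s.io/echoserver:1.4".
	func addonUsesImage(kubeContext, namespace, deployment, expected string) (bool, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"-n", namespace,
			"get", "deploy", deployment,
			"-o=jsonpath={.spec.template.spec.containers[*].image}",
		).Output()
		if err != nil {
			return false, fmt.Errorf("reading images of %s/%s: %w", namespace, deployment, err)
		}
		for _, image := range strings.Fields(string(out)) {
			if strings.Contains(image, expected) {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := addonUsesImage("default-k8s-diff-port-873005",
			"kubernetes-dashboard", "dashboard-metrics-scraper",
			"registry.k8s.io/echoserver:1.4")
		fmt.Println(ok, err)
	}
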
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-873005 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-873005 logs -n 25: (1.593004639s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-873005  | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC |                     |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229073             | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229073                  | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229073 --memory=2200 --alsologtostderr   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-229073 image list                           | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-096443 | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | disable-driver-mounts-096443                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625812                  | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:25 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-711547             | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-873005       | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-958254            | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:29 UTC |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-958254                 | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:17 UTC | 31 Jan 24 03:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:39 UTC | 31 Jan 24 03:39 UTC |
	| delete  | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:39 UTC | 31 Jan 24 03:39 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:17:03
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:17:03.356553 1466459 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:17:03.356722 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356731 1466459 out.go:309] Setting ErrFile to fd 2...
	I0131 03:17:03.356736 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356921 1466459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:17:03.357497 1466459 out.go:303] Setting JSON to false
	I0131 03:17:03.358564 1466459 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28767,"bootTime":1706642257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:17:03.358632 1466459 start.go:138] virtualization: kvm guest
	I0131 03:17:03.361346 1466459 out.go:177] * [embed-certs-958254] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:17:03.363037 1466459 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:17:03.363052 1466459 notify.go:220] Checking for updates...
	I0131 03:17:03.364655 1466459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:17:03.366388 1466459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:17:03.368086 1466459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:17:03.369351 1466459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:17:03.370735 1466459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:17:03.372623 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:17:03.373004 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.373116 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.388091 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0131 03:17:03.388612 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.389200 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.389224 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.389606 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.389816 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.390157 1466459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:17:03.390631 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.390696 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.407513 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0131 03:17:03.408013 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.408552 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.408578 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.408936 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.409175 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.446580 1466459 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 03:17:03.447834 1466459 start.go:298] selected driver: kvm2
	I0131 03:17:03.447850 1466459 start.go:902] validating driver "kvm2" against &{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.447974 1466459 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:17:03.448798 1466459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.448929 1466459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:17:03.464292 1466459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:17:03.464713 1466459 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:17:03.464803 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:17:03.464821 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:17:03.464840 1466459 start_flags.go:321] config:
	{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.465034 1466459 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.466926 1466459 out.go:177] * Starting control plane node embed-certs-958254 in cluster embed-certs-958254
	I0131 03:17:03.166851 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:03.468094 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:17:03.468158 1466459 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:17:03.468179 1466459 cache.go:56] Caching tarball of preloaded images
	I0131 03:17:03.468267 1466459 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:17:03.468280 1466459 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:17:03.468422 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:17:03.468675 1466459 start.go:365] acquiring machines lock for embed-certs-958254: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:17:09.246814 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:12.318761 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:18.398731 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:21.470788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:27.550785 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:30.622804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:36.702802 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:39.774755 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:45.854764 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:48.926773 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:55.006804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:58.078768 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:04.158801 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:07.230749 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:13.310800 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:16.382788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:22.462833 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:25.534734 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:31.614821 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:34.686831 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:40.766796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:43.838796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:49.918807 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:52.923102 1465727 start.go:369] acquired machines lock for "old-k8s-version-711547" in 4m24.328353275s
	I0131 03:18:52.923156 1465727 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:18:52.923163 1465727 fix.go:54] fixHost starting: 
	I0131 03:18:52.923502 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:18:52.923535 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:18:52.938858 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0131 03:18:52.939426 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:18:52.939966 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:18:52.939993 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:18:52.940435 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:18:52.940700 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:18:52.940890 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:18:52.942694 1465727 fix.go:102] recreateIfNeeded on old-k8s-version-711547: state=Stopped err=<nil>
	I0131 03:18:52.942735 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	W0131 03:18:52.942937 1465727 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:18:52.944846 1465727 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-711547" ...
	I0131 03:18:52.946449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Start
	I0131 03:18:52.946661 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring networks are active...
	I0131 03:18:52.947481 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network default is active
	I0131 03:18:52.947856 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network mk-old-k8s-version-711547 is active
	I0131 03:18:52.948334 1465727 main.go:141] libmachine: (old-k8s-version-711547) Getting domain xml...
	I0131 03:18:52.949108 1465727 main.go:141] libmachine: (old-k8s-version-711547) Creating domain...
	I0131 03:18:52.920695 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:18:52.920763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:18:52.922905 1465496 machine.go:91] provisioned docker machine in 4m37.358485704s
	I0131 03:18:52.922986 1465496 fix.go:56] fixHost completed within 4m37.381896689s
	I0131 03:18:52.922997 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 4m37.381936859s
	W0131 03:18:52.923026 1465496 start.go:694] error starting host: provision: host is not running
	W0131 03:18:52.923126 1465496 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0131 03:18:52.923138 1465496 start.go:709] Will try again in 5 seconds ...
	I0131 03:18:54.170545 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting to get IP...
	I0131 03:18:54.171580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.171974 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.172053 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.171968 1467209 retry.go:31] will retry after 195.285731ms: waiting for machine to come up
	I0131 03:18:54.368768 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.369288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.369325 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.369224 1467209 retry.go:31] will retry after 291.163288ms: waiting for machine to come up
	I0131 03:18:54.661822 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.662222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.662266 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.662214 1467209 retry.go:31] will retry after 396.125436ms: waiting for machine to come up
	I0131 03:18:55.059613 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.060062 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.060099 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.060009 1467209 retry.go:31] will retry after 609.786973ms: waiting for machine to come up
	I0131 03:18:55.671954 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.672388 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.672431 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.672334 1467209 retry.go:31] will retry after 716.179011ms: waiting for machine to come up
	I0131 03:18:56.390239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:56.390632 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:56.390667 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:56.390568 1467209 retry.go:31] will retry after 881.998023ms: waiting for machine to come up
	I0131 03:18:57.274841 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:57.275260 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:57.275293 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:57.275202 1467209 retry.go:31] will retry after 1.172177257s: waiting for machine to come up
	I0131 03:18:58.449291 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:58.449814 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:58.449869 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:58.449774 1467209 retry.go:31] will retry after 1.046487536s: waiting for machine to come up
	I0131 03:18:57.925392 1465496 start.go:365] acquiring machines lock for no-preload-625812: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:18:59.498215 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:59.498699 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:59.498739 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:59.498640 1467209 retry.go:31] will retry after 1.563889217s: waiting for machine to come up
	I0131 03:19:01.063580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:01.064137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:01.064179 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:01.064063 1467209 retry.go:31] will retry after 2.225514736s: waiting for machine to come up
	I0131 03:19:03.290747 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:03.291285 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:03.291322 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:03.291205 1467209 retry.go:31] will retry after 2.011947032s: waiting for machine to come up
	I0131 03:19:05.305574 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:05.306072 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:05.306106 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:05.306012 1467209 retry.go:31] will retry after 3.104285698s: waiting for machine to come up
	I0131 03:19:08.411557 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:08.412028 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:08.412054 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:08.411975 1467209 retry.go:31] will retry after 4.201966677s: waiting for machine to come up
	I0131 03:19:12.618299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.618866 1465727 main.go:141] libmachine: (old-k8s-version-711547) Found IP for machine: 192.168.50.63
	I0131 03:19:12.618893 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserving static IP address...
	I0131 03:19:12.618913 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has current primary IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.619364 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserved static IP address: 192.168.50.63
	I0131 03:19:12.619389 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting for SSH to be available...
	I0131 03:19:12.619414 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.619452 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | skip adding static IP to network mk-old-k8s-version-711547 - found existing host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"}
	I0131 03:19:12.619471 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Getting to WaitForSSH function...
	I0131 03:19:12.621473 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621783 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.621805 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621891 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH client type: external
	I0131 03:19:12.621934 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa (-rw-------)
	I0131 03:19:12.621965 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:12.621977 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | About to run SSH command:
	I0131 03:19:12.621987 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | exit 0
	I0131 03:19:12.718254 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:12.718659 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetConfigRaw
	I0131 03:19:12.719369 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:12.722134 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722588 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.722611 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722906 1465727 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/config.json ...
	I0131 03:19:12.723101 1465727 machine.go:88] provisioning docker machine ...
	I0131 03:19:12.723121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:12.723399 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723611 1465727 buildroot.go:166] provisioning hostname "old-k8s-version-711547"
	I0131 03:19:12.723630 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723795 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.726052 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726463 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.726507 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726656 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.726832 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727022 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727122 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.727283 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.727665 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.727680 1465727 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-711547 && echo "old-k8s-version-711547" | sudo tee /etc/hostname
	I0131 03:19:12.870818 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-711547
	
	I0131 03:19:12.870872 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.873799 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874205 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.874242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874355 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.874585 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874774 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874920 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.875079 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.875412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.875428 1465727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-711547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-711547/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-711547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:13.014386 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:13.014419 1465727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:13.014447 1465727 buildroot.go:174] setting up certificates
	I0131 03:19:13.014460 1465727 provision.go:83] configureAuth start
	I0131 03:19:13.014471 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:13.014821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:13.017730 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018105 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.018149 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018286 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.020361 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020680 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.020707 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020896 1465727 provision.go:138] copyHostCerts
	I0131 03:19:13.020961 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:13.020975 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:13.021069 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:13.021199 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:13.021212 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:13.021252 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:13.021393 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:13.021404 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:13.021442 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:13.021512 1465727 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-711547 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube old-k8s-version-711547]
	I0131 03:19:13.265370 1465727 provision.go:172] copyRemoteCerts
	I0131 03:19:13.265438 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:13.265466 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.268546 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269055 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.269090 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269281 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.269518 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.269688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.269849 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.362848 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:13.384287 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0131 03:19:13.405813 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:19:13.427630 1465727 provision.go:86] duration metric: configureAuth took 413.151329ms
	I0131 03:19:13.427671 1465727 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:13.427880 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:19:13.427963 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.430829 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.431299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431515 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.431771 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.431939 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.432092 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.432256 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.432619 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.432638 1465727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:14.011257 1465898 start.go:369] acquired machines lock for "default-k8s-diff-port-873005" in 4m34.419162413s
	I0131 03:19:14.011330 1465898 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:14.011340 1465898 fix.go:54] fixHost starting: 
	I0131 03:19:14.011729 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:14.011767 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:14.028941 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0131 03:19:14.029399 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:14.029937 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:19:14.029968 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:14.030321 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:14.030510 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:14.030692 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:19:14.032290 1465898 fix.go:102] recreateIfNeeded on default-k8s-diff-port-873005: state=Stopped err=<nil>
	I0131 03:19:14.032322 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	W0131 03:19:14.032499 1465898 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:14.034263 1465898 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-873005" ...
	I0131 03:19:14.035857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Start
	I0131 03:19:14.036028 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring networks are active...
	I0131 03:19:14.036734 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network default is active
	I0131 03:19:14.037140 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network mk-default-k8s-diff-port-873005 is active
	I0131 03:19:14.037572 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Getting domain xml...
	I0131 03:19:14.038254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Creating domain...
	I0131 03:19:13.745584 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:13.745630 1465727 machine.go:91] provisioned docker machine in 1.02251207s
	I0131 03:19:13.745646 1465727 start.go:300] post-start starting for "old-k8s-version-711547" (driver="kvm2")
	I0131 03:19:13.745663 1465727 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:13.745688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:13.746069 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:13.746100 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.748837 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749259 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.749309 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749489 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.749691 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.749848 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.749999 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.844423 1465727 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:13.848230 1465727 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:13.848263 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:13.848346 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:13.848431 1465727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:13.848517 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:13.857046 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:13.877753 1465727 start.go:303] post-start completed in 132.085834ms
	I0131 03:19:13.877806 1465727 fix.go:56] fixHost completed within 20.954639604s
	I0131 03:19:13.877836 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.880627 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.880914 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.880948 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.881168 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.881401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881594 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881802 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.882012 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.882412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.882424 1465727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:14.011062 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671153.963761136
	
	I0131 03:19:14.011098 1465727 fix.go:206] guest clock: 1706671153.963761136
	I0131 03:19:14.011111 1465727 fix.go:219] Guest: 2024-01-31 03:19:13.963761136 +0000 UTC Remote: 2024-01-31 03:19:13.877812082 +0000 UTC m=+285.451358106 (delta=85.949054ms)
	I0131 03:19:14.011141 1465727 fix.go:190] guest clock delta is within tolerance: 85.949054ms
	I0131 03:19:14.011149 1465727 start.go:83] releasing machines lock for "old-k8s-version-711547", held for 21.088010365s
	I0131 03:19:14.011234 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.011556 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:14.014323 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014754 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.014790 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014966 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015623 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015846 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015953 1465727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:14.016017 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.016087 1465727 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:14.016121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.018767 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019063 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019147 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019185 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019338 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019422 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019450 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019500 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019693 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.019775 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019854 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.019952 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.020096 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.111280 1465727 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:14.148710 1465727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:14.287476 1465727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:14.293232 1465727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:14.293309 1465727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:14.306910 1465727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:14.306939 1465727 start.go:475] detecting cgroup driver to use...
	I0131 03:19:14.307001 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:14.325824 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:14.339835 1465727 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:14.339908 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:14.354064 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:14.367342 1465727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:14.476462 1465727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:14.602643 1465727 docker.go:233] disabling docker service ...
	I0131 03:19:14.602711 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:14.618228 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:14.630450 1465727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:14.758176 1465727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:14.870949 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:14.882268 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:14.898622 1465727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0131 03:19:14.898685 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.907377 1465727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:14.907470 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.915868 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.924046 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.932324 1465727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:14.941046 1465727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:14.949134 1465727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:14.949196 1465727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:14.965561 1465727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:14.973790 1465727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:15.078782 1465727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:15.239650 1465727 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:15.239735 1465727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:15.244418 1465727 start.go:543] Will wait 60s for crictl version
	I0131 03:19:15.244501 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:15.247984 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:15.287716 1465727 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:15.287827 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.339818 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.393318 1465727 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0131 03:19:15.394911 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:15.397888 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:15.398313 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398637 1465727 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:15.402865 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:15.414268 1465727 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 03:19:15.414361 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:15.460589 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:15.460676 1465727 ssh_runner.go:195] Run: which lz4
	I0131 03:19:15.464663 1465727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:15.468694 1465727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:15.468728 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0131 03:19:17.115892 1465727 crio.go:444] Took 1.651263 seconds to copy over tarball
	I0131 03:19:17.115979 1465727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:15.308732 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting to get IP...
	I0131 03:19:15.309704 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310121 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.310092 1467325 retry.go:31] will retry after 215.51674ms: waiting for machine to come up
	I0131 03:19:15.527614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528155 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528192 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.528108 1467325 retry.go:31] will retry after 346.07944ms: waiting for machine to come up
	I0131 03:19:15.875792 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876340 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876375 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.876290 1467325 retry.go:31] will retry after 476.08407ms: waiting for machine to come up
	I0131 03:19:16.353712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354323 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.354196 1467325 retry.go:31] will retry after 382.739917ms: waiting for machine to come up
	I0131 03:19:16.738958 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739534 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739566 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.739504 1467325 retry.go:31] will retry after 511.138171ms: waiting for machine to come up
	I0131 03:19:17.252373 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252862 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252902 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:17.252798 1467325 retry.go:31] will retry after 879.985444ms: waiting for machine to come up
	I0131 03:19:18.134757 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135287 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135313 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:18.135233 1467325 retry.go:31] will retry after 1.043236668s: waiting for machine to come up
	I0131 03:19:19.179844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180339 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180369 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:19.180288 1467325 retry.go:31] will retry after 1.296129808s: waiting for machine to come up
	I0131 03:19:19.822171 1465727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706149181s)
	I0131 03:19:19.822217 1465727 crio.go:451] Took 2.706292 seconds to extract the tarball
	I0131 03:19:19.822233 1465727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:19.861493 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:19.905950 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:19.905979 1465727 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:19:19.906033 1465727 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.906061 1465727 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.906080 1465727 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.906077 1465727 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.906094 1465727 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:19.906099 1465727 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.906111 1465727 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0131 03:19:19.906179 1465727 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907636 1465727 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.907728 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.907746 1465727 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907750 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.907749 1465727 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.907783 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.907805 1465727 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0131 03:19:19.907807 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.091717 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.132448 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.140199 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0131 03:19:20.146177 1465727 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0131 03:19:20.146263 1465727 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.146324 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.206757 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.216932 1465727 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0131 03:19:20.216985 1465727 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.217082 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219340 1465727 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0131 03:19:20.219367 1465727 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0131 03:19:20.219390 1465727 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.219408 1465727 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.219432 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219449 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.222519 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.241389 1465727 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0131 03:19:20.241449 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.241452 1465727 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0131 03:19:20.241566 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.293129 1465727 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0131 03:19:20.293183 1465727 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.293213 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.293262 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.293284 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.293232 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321447 1465727 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0131 03:19:20.321512 1465727 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.321576 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321605 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0131 03:19:20.321743 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0131 03:19:20.401651 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0131 03:19:20.401720 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.401731 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0131 03:19:20.401793 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0131 03:19:20.401872 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0131 03:19:20.401945 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.439360 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0131 03:19:20.449635 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0131 03:19:20.765201 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:20.911818 1465727 cache_images.go:92] LoadImages completed in 1.005820808s
	W0131 03:19:20.911923 1465727 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0131 03:19:20.912019 1465727 ssh_runner.go:195] Run: crio config
	I0131 03:19:20.978267 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:20.978296 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:20.978318 1465727 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:20.978361 1465727 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-711547 NodeName:old-k8s-version-711547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0131 03:19:20.978540 1465727 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-711547"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-711547
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.63:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:20.978635 1465727 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-711547 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:19:20.978690 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0131 03:19:20.988177 1465727 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:20.988281 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:20.999558 1465727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0131 03:19:21.018567 1465727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:21.036137 1465727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0131 03:19:21.051742 1465727 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:21.056334 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:21.068635 1465727 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547 for IP: 192.168.50.63
	I0131 03:19:21.068670 1465727 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:21.068847 1465727 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:21.068894 1465727 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:21.069089 1465727 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/client.key
	I0131 03:19:21.069185 1465727 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key.1519f60b
	I0131 03:19:21.069262 1465727 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key
	I0131 03:19:21.069418 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:21.069460 1465727 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:21.069476 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:21.069517 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:21.069556 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:21.069595 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:21.069658 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:21.070416 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:21.096160 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:21.119906 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:21.144478 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:21.169174 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:21.191807 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:21.215673 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:21.237705 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:21.262763 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:21.284935 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:21.306372 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:21.327718 1465727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:21.343219 1465727 ssh_runner.go:195] Run: openssl version
	I0131 03:19:21.348904 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:21.358119 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362537 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362619 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.368555 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:21.378236 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:21.387651 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392087 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392155 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.397511 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:21.406631 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:21.416176 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420716 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420816 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.426032 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:21.434979 1465727 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:21.439153 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:21.444648 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:21.450243 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:21.455489 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:21.460794 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:21.466219 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:19:21.471530 1465727 kubeadm.go:404] StartCluster: {Name:old-k8s-version-711547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:21.471628 1465727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:21.471677 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:21.508722 1465727 cri.go:89] found id: ""
	I0131 03:19:21.508795 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:21.517913 1465727 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:21.517943 1465727 kubeadm.go:636] restartCluster start
	I0131 03:19:21.518012 1465727 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:21.526290 1465727 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:21.527501 1465727 kubeconfig.go:92] found "old-k8s-version-711547" server: "https://192.168.50.63:8443"
	I0131 03:19:21.530259 1465727 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:21.538442 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:21.538528 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:21.548956 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.038468 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.038574 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.049394 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.538605 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.538701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.549651 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:23.038857 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.038988 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.050489 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:20.478788 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479296 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479341 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:20.479262 1467325 retry.go:31] will retry after 1.385706797s: waiting for machine to come up
	I0131 03:19:21.867040 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867480 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867506 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:21.867432 1467325 retry.go:31] will retry after 2.023566474s: waiting for machine to come up
	I0131 03:19:23.893713 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894188 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894222 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:23.894119 1467325 retry.go:31] will retry after 2.335724195s: waiting for machine to come up
	I0131 03:19:23.539335 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.539444 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.550866 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.038592 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.038710 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.050077 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.538579 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.538661 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.549810 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.039420 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.039512 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.051101 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.538549 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.538654 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.552821 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.039279 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.039395 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.050150 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.538699 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.538841 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.553086 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.038585 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.038701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.050685 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.539261 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.539392 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.550316 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:28.039448 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.039564 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.051196 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.231540 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231945 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231970 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:26.231895 1467325 retry.go:31] will retry after 2.956919877s: waiting for machine to come up
	I0131 03:19:29.190010 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190513 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190549 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:29.190433 1467325 retry.go:31] will retry after 3.186526476s: waiting for machine to come up
	I0131 03:19:28.539230 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.539326 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.551055 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.038675 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.038783 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.049926 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.538507 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.538606 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.549309 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.039257 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.039359 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.050555 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.539147 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.539286 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.550179 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.038685 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.038809 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.050144 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.538939 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.539024 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.549604 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.549647 1465727 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:31.549660 1465727 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:31.549678 1465727 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:31.549770 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:31.587751 1465727 cri.go:89] found id: ""
	I0131 03:19:31.587822 1465727 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:31.603397 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:31.612195 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:31.612263 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620959 1465727 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620984 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:31.737416 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.645078 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.861238 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.944897 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:33.048396 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:33.048496 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:33.587337 1466459 start.go:369] acquired machines lock for "embed-certs-958254" in 2m30.118621848s
	I0131 03:19:33.587411 1466459 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:33.587444 1466459 fix.go:54] fixHost starting: 
	I0131 03:19:33.587872 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:33.587906 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:33.608024 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0131 03:19:33.608545 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:33.609015 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:19:33.609048 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:33.609468 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:33.609659 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:33.609796 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:19:33.611524 1466459 fix.go:102] recreateIfNeeded on embed-certs-958254: state=Stopped err=<nil>
	I0131 03:19:33.611572 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	W0131 03:19:33.611752 1466459 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:33.613613 1466459 out.go:177] * Restarting existing kvm2 VM for "embed-certs-958254" ...
	I0131 03:19:32.379632 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380099 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380134 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Found IP for machine: 192.168.61.123
	I0131 03:19:32.380150 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserving static IP address...
	I0131 03:19:32.380555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserved static IP address: 192.168.61.123
	I0131 03:19:32.380594 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.380610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for SSH to be available...
	I0131 03:19:32.380647 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | skip adding static IP to network mk-default-k8s-diff-port-873005 - found existing host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"}
	I0131 03:19:32.380661 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Getting to WaitForSSH function...
	I0131 03:19:32.382401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.382787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382872 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH client type: external
	I0131 03:19:32.382903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa (-rw-------)
	I0131 03:19:32.382943 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:32.382959 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | About to run SSH command:
	I0131 03:19:32.382984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | exit 0
	I0131 03:19:32.470672 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:32.471097 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetConfigRaw
	I0131 03:19:32.471768 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.474225 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474597 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.474631 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474948 1465898 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/config.json ...
	I0131 03:19:32.475139 1465898 machine.go:88] provisioning docker machine ...
	I0131 03:19:32.475158 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:32.475374 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475542 1465898 buildroot.go:166] provisioning hostname "default-k8s-diff-port-873005"
	I0131 03:19:32.475564 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475720 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.478005 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478356 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.478391 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478466 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.478693 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.478871 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.479083 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.479287 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.479622 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.479636 1465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-873005 && echo "default-k8s-diff-port-873005" | sudo tee /etc/hostname
	I0131 03:19:32.608136 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-873005
	
	I0131 03:19:32.608173 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.611145 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611544 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.611580 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611716 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.611937 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612154 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612354 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.612511 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.612878 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.612903 1465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-873005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-873005/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-873005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:32.734103 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:32.734144 1465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:32.734176 1465898 buildroot.go:174] setting up certificates
	I0131 03:19:32.734196 1465898 provision.go:83] configureAuth start
	I0131 03:19:32.734209 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.734550 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.737468 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.737810 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.737844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.738096 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.740787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.741233 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741374 1465898 provision.go:138] copyHostCerts
	I0131 03:19:32.741429 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:32.741442 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:32.741498 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:32.741632 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:32.741642 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:32.741665 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:32.741716 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:32.741722 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:32.741738 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:32.741784 1465898 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-873005 san=[192.168.61.123 192.168.61.123 localhost 127.0.0.1 minikube default-k8s-diff-port-873005]
	I0131 03:19:32.850632 1465898 provision.go:172] copyRemoteCerts
	I0131 03:19:32.850695 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:32.850721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.853291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.853651 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.854016 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.854194 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.854361 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:32.943528 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0131 03:19:32.970345 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:32.995909 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:33.024408 1465898 provision.go:86] duration metric: configureAuth took 290.196472ms
	I0131 03:19:33.024438 1465898 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:33.024661 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:33.024755 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.027751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.028312 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028469 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.028719 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.028961 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.029180 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.029424 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.029790 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.029810 1465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:33.350806 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:33.350839 1465898 machine.go:91] provisioned docker machine in 875.685131ms
	I0131 03:19:33.350855 1465898 start.go:300] post-start starting for "default-k8s-diff-port-873005" (driver="kvm2")
	I0131 03:19:33.350871 1465898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:33.350895 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.351287 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:33.351334 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.353986 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354419 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.354443 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354689 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.354898 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.355046 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.355221 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.439603 1465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:33.443119 1465898 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:33.443145 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:33.443222 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:33.443320 1465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:33.443430 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:33.451425 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:33.471270 1465898 start.go:303] post-start completed in 120.397142ms
	I0131 03:19:33.471302 1465898 fix.go:56] fixHost completed within 19.459960903s
	I0131 03:19:33.471326 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.473691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474060 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.474091 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474244 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.474430 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474627 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474753 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.474918 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.475237 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.475249 1465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:33.587174 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671173.532604525
	
	I0131 03:19:33.587202 1465898 fix.go:206] guest clock: 1706671173.532604525
	I0131 03:19:33.587217 1465898 fix.go:219] Guest: 2024-01-31 03:19:33.532604525 +0000 UTC Remote: 2024-01-31 03:19:33.47130747 +0000 UTC m=+294.038044427 (delta=61.297055ms)
	I0131 03:19:33.587243 1465898 fix.go:190] guest clock delta is within tolerance: 61.297055ms
	I0131 03:19:33.587251 1465898 start.go:83] releasing machines lock for "default-k8s-diff-port-873005", held for 19.57594393s
	I0131 03:19:33.587282 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.587557 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:33.590395 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590776 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.590809 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590995 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591623 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591822 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591926 1465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:33.591999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.592054 1465898 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:33.592078 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.594999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595446 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.595477 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595644 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.595805 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595879 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596082 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596258 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.596286 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.596380 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.596390 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.596579 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596760 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596951 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.715222 1465898 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:33.721794 1465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:33.871506 1465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:33.877488 1465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:33.877596 1465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:33.896121 1465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:33.896156 1465898 start.go:475] detecting cgroup driver to use...
	I0131 03:19:33.896245 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:33.912876 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:33.927661 1465898 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:33.927743 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:33.944332 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:33.960438 1465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:34.086879 1465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:34.218866 1465898 docker.go:233] disabling docker service ...
	I0131 03:19:34.218946 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:34.233585 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:34.246358 1465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:34.387480 1465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:34.513082 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:34.526532 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:34.544801 1465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:34.544902 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.558806 1465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:34.558905 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.569251 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.582784 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.595979 1465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:34.608318 1465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:34.616417 1465898 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:34.616494 1465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:34.629018 1465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:34.638513 1465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:34.753541 1465898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:34.963779 1465898 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:34.963868 1465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:34.969755 1465898 start.go:543] Will wait 60s for crictl version
	I0131 03:19:34.969826 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:19:34.974176 1465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:35.020759 1465898 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:35.020850 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.072999 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.143712 1465898 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:19:33.615078 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Start
	I0131 03:19:33.615258 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring networks are active...
	I0131 03:19:33.616056 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network default is active
	I0131 03:19:33.616376 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network mk-embed-certs-958254 is active
	I0131 03:19:33.616770 1466459 main.go:141] libmachine: (embed-certs-958254) Getting domain xml...
	I0131 03:19:33.617424 1466459 main.go:141] libmachine: (embed-certs-958254) Creating domain...
	I0131 03:19:35.016562 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting to get IP...
	I0131 03:19:35.017711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.018134 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.018234 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.018115 1467469 retry.go:31] will retry after 281.115622ms: waiting for machine to come up
	I0131 03:19:35.300987 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.301642 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.301672 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.301583 1467469 retry.go:31] will retry after 382.696531ms: waiting for machine to come up
	I0131 03:19:35.686371 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.686945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.686983 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.686881 1467469 retry.go:31] will retry after 467.397008ms: waiting for machine to come up
	I0131 03:19:36.156392 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.157129 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.157161 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.157087 1467469 retry.go:31] will retry after 588.034996ms: waiting for machine to come up
	I0131 03:19:36.747103 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.747739 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.747771 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.747711 1467469 retry.go:31] will retry after 570.532804ms: waiting for machine to come up
	I0131 03:19:37.319694 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.320231 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.320264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.320206 1467469 retry.go:31] will retry after 572.77687ms: waiting for machine to come up
	I0131 03:19:37.895308 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.895814 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.895844 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.895769 1467469 retry.go:31] will retry after 833.23491ms: waiting for machine to come up
	I0131 03:19:33.549149 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.048799 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.549314 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.048885 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.075463 1465727 api_server.go:72] duration metric: took 2.027068042s to wait for apiserver process to appear ...
	I0131 03:19:35.075490 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:35.075525 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:35.145198 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:35.148610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149052 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:35.149087 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149329 1465898 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:35.153543 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:35.169144 1465898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:35.169226 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:35.217572 1465898 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:35.217675 1465898 ssh_runner.go:195] Run: which lz4
	I0131 03:19:35.221897 1465898 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:35.226333 1465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:35.226373 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:36.870773 1465898 crio.go:444] Took 1.648904 seconds to copy over tarball
	I0131 03:19:36.870903 1465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:38.730812 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:38.731317 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:38.731367 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:38.731283 1467469 retry.go:31] will retry after 1.083923411s: waiting for machine to come up
	I0131 03:19:39.816550 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:39.817000 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:39.817035 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:39.816957 1467469 retry.go:31] will retry after 1.414569505s: waiting for machine to come up
	I0131 03:19:41.232711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:41.233072 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:41.233104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:41.233020 1467469 retry.go:31] will retry after 1.829994317s: waiting for machine to come up
	I0131 03:19:43.065343 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:43.065823 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:43.065857 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:43.065760 1467469 retry.go:31] will retry after 2.506323142s: waiting for machine to come up
	I0131 03:19:40.076389 1465727 api_server.go:269] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0131 03:19:40.076448 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.717017 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.717059 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:41.717079 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.738258 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.738291 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:42.075699 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.730135 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.730181 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:42.730203 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.805335 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.805375 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.076421 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.082935 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:43.082971 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.575664 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.582814 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:19:43.593073 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:19:43.593113 1465727 api_server.go:131] duration metric: took 8.517613988s to wait for apiserver health ...
	I0131 03:19:43.593127 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:43.593144 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:43.594982 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:19:39.815034 1465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944091458s)
	I0131 03:19:39.815074 1465898 crio.go:451] Took 2.944224 seconds to extract the tarball
	I0131 03:19:39.815090 1465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:39.855696 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:39.904386 1465898 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:19:39.904418 1465898 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:19:39.904509 1465898 ssh_runner.go:195] Run: crio config
	I0131 03:19:39.972894 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:19:39.972928 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:39.972957 1465898 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:39.972985 1465898 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-873005 NodeName:default-k8s-diff-port-873005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:19:39.973201 1465898 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-873005"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:39.973298 1465898 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-873005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0131 03:19:39.973365 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:19:39.982097 1465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:39.982206 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:39.993781 1465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0131 03:19:40.012618 1465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:40.031973 1465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0131 03:19:40.049646 1465898 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:40.053498 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:40.066873 1465898 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005 for IP: 192.168.61.123
	I0131 03:19:40.066914 1465898 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:40.067198 1465898 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:40.067254 1465898 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:40.067376 1465898 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/client.key
	I0131 03:19:40.067474 1465898 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key.596e38b1
	I0131 03:19:40.067535 1465898 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key
	I0131 03:19:40.067748 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:40.067797 1465898 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:40.067813 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:40.067850 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:40.067885 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:40.067924 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:40.067978 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:40.068687 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:40.094577 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:40.117833 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:40.140782 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:40.163701 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:40.187177 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:40.218570 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:40.246136 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:40.275403 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:40.302040 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:40.327371 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:40.352927 1465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:40.371690 1465898 ssh_runner.go:195] Run: openssl version
	I0131 03:19:40.377700 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:40.387507 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393609 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393701 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.401095 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:40.415647 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:40.426902 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431720 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431803 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.437347 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:40.446986 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:40.457779 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462716 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462790 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.468321 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:40.481055 1465898 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:40.486096 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:40.492538 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:40.498664 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:40.504630 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:40.510588 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:40.516480 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
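	The "-checkend 86400" probes above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would mark the certificate for regeneration. A sketch of the same check, assuming local exec rather than the remote runner:

	package sketch

	import "os/exec"

	// certValidFor24h returns true when openssl reports the certificate will
	// not expire within the next 86400 seconds, matching the "-checkend 86400"
	// probes in the log.
	func certValidFor24h(certPath string) bool {
		return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
	}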
	I0131 03:19:40.524391 1465898 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-873005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:40.524509 1465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:40.524570 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:40.575788 1465898 cri.go:89] found id: ""
	I0131 03:19:40.575887 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:40.585291 1465898 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:40.585320 1465898 kubeadm.go:636] restartCluster start
	I0131 03:19:40.585383 1465898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:40.594593 1465898 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:40.596215 1465898 kubeconfig.go:92] found "default-k8s-diff-port-873005" server: "https://192.168.61.123:8444"
	I0131 03:19:40.600123 1465898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:40.609224 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:40.609289 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:40.620769 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.110331 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.110450 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.121982 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.609492 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.609592 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.621972 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.109411 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.109515 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.124820 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.609296 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.609412 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.621029 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.109511 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.109606 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.124911 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.609397 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.609514 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.626240 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:44.109323 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.109419 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.124549 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
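	The repeated "Checking apiserver status" entries are a poll loop: roughly every half second the restart path runs pgrep for a kube-apiserver process and logs a warning while none exists. A sketch of that loop under an assumed run helper (not minikube's actual api_server code):

	package sketch

	import (
		"fmt"
		"time"
	)

	// waitForAPIServer polls for a kube-apiserver process about twice a second,
	// as the log does, until it appears or the deadline passes. run stands in
	// for the remote command runner.
	func waitForAPIServer(run func(cmd string) error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if run(`sudo pgrep -xnf kube-apiserver.*minikube.*`) == nil {
				return nil // apiserver process found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}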
	I0131 03:19:45.573357 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:45.573785 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:45.573821 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:45.573735 1467469 retry.go:31] will retry after 3.608126135s: waiting for machine to come up
	I0131 03:19:43.596636 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:19:43.613239 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:19:43.655123 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:19:43.665773 1465727 system_pods.go:59] 7 kube-system pods found
	I0131 03:19:43.665819 1465727 system_pods.go:61] "coredns-5644d7b6d9-2g2fj" [fc3c718c-696b-4a57-83e2-d9ee3bed6923] Running
	I0131 03:19:43.665844 1465727 system_pods.go:61] "etcd-old-k8s-version-711547" [4c5a2527-ffa7-4771-8380-56556030ad90] Running
	I0131 03:19:43.665852 1465727 system_pods.go:61] "kube-apiserver-old-k8s-version-711547" [df7cbcbe-1aeb-4986-82e5-70d495b2579d] Running
	I0131 03:19:43.665859 1465727 system_pods.go:61] "kube-controller-manager-old-k8s-version-711547" [21cccd1c-4b8e-4d4f-956d-872aa474e9d8] Running
	I0131 03:19:43.665868 1465727 system_pods.go:61] "kube-proxy-7dtkz" [aac05831-252e-486d-8bc8-772731374a89] Running
	I0131 03:19:43.665875 1465727 system_pods.go:61] "kube-scheduler-old-k8s-version-711547" [da2f43ad-bbc3-44fb-a608-08c2ae08818f] Running
	I0131 03:19:43.665885 1465727 system_pods.go:61] "storage-provisioner" [f16355c3-b573-40f2-ad98-32c077f04e46] Running
	I0131 03:19:43.665894 1465727 system_pods.go:74] duration metric: took 10.742015ms to wait for pod list to return data ...
	I0131 03:19:43.665915 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:19:43.670287 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:19:43.670328 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:19:43.670343 1465727 node_conditions.go:105] duration metric: took 4.422551ms to run NodePressure ...
	I0131 03:19:43.670366 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:43.947579 1465727 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:19:43.952499 1465727 retry.go:31] will retry after 170.414704ms: kubelet not initialised
	I0131 03:19:44.130420 1465727 retry.go:31] will retry after 504.822426ms: kubelet not initialised
	I0131 03:19:44.640095 1465727 retry.go:31] will retry after 519.270243ms: kubelet not initialised
	I0131 03:19:45.164417 1465727 retry.go:31] will retry after 730.256814ms: kubelet not initialised
	I0131 03:19:45.903026 1465727 retry.go:31] will retry after 853.098887ms: kubelet not initialised
	I0131 03:19:46.764300 1465727 retry.go:31] will retry after 2.467014704s: kubelet not initialised
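	The "will retry after ..." lines from retry.go show an increasing, jittered backoff while the restarted kubelet initialises. A generic sketch of that pattern (the helper and parameters are illustrative, not the retry package's API):

	package sketch

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs check with a growing, jittered delay between
	// attempts, similar to the retry.go entries in the log.
	func retryWithBackoff(check func() error, attempts int, base time.Duration) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := check(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
			time.Sleep(delay + jitter)
			delay *= 2 // wait longer before the next attempt
		}
		return fmt.Errorf("still failing after %d attempts", attempts)
	}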
	I0131 03:19:44.609572 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.609682 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.625242 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.109761 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.109894 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.121467 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.610114 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.610210 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.621421 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.109926 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.109996 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.121003 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.609509 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.609649 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.620779 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.110208 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.110316 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.122909 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.609355 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.609474 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.620375 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.109993 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.110131 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.123531 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.610170 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.610266 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.620964 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.109874 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.109997 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.121344 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.183666 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:49.184174 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:49.184209 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:49.184103 1467469 retry.go:31] will retry after 3.277150176s: waiting for machine to come up
	I0131 03:19:52.465465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.465830 1466459 main.go:141] libmachine: (embed-certs-958254) Found IP for machine: 192.168.39.232
	I0131 03:19:52.465849 1466459 main.go:141] libmachine: (embed-certs-958254) Reserving static IP address...
	I0131 03:19:52.465863 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has current primary IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.466264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.466307 1466459 main.go:141] libmachine: (embed-certs-958254) Reserved static IP address: 192.168.39.232
	I0131 03:19:52.466331 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting for SSH to be available...
	I0131 03:19:52.466352 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | skip adding static IP to network mk-embed-certs-958254 - found existing host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"}
	I0131 03:19:52.466368 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Getting to WaitForSSH function...
	I0131 03:19:52.468562 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.468867 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.468900 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.469041 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH client type: external
	I0131 03:19:52.469074 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa (-rw-------)
	I0131 03:19:52.469117 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:52.469137 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | About to run SSH command:
	I0131 03:19:52.469151 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | exit 0
	I0131 03:19:52.554397 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:52.554838 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetConfigRaw
	I0131 03:19:52.555611 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.558511 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.558906 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.558945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.559188 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:19:52.559400 1466459 machine.go:88] provisioning docker machine ...
	I0131 03:19:52.559421 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:52.559645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559816 1466459 buildroot.go:166] provisioning hostname "embed-certs-958254"
	I0131 03:19:52.559831 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559994 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.562543 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.562901 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.562933 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.563085 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.563276 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563436 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563628 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.563800 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.564147 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.564161 1466459 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-958254 && echo "embed-certs-958254" | sudo tee /etc/hostname
	I0131 03:19:52.688777 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-958254
	
	I0131 03:19:52.688817 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.692015 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.692497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692797 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.693013 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693184 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693388 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.693579 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.694043 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.694071 1466459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-958254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-958254/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-958254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:52.821443 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:52.821489 1466459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:52.821543 1466459 buildroot.go:174] setting up certificates
	I0131 03:19:52.821567 1466459 provision.go:83] configureAuth start
	I0131 03:19:52.821583 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.821930 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.825108 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825496 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.825527 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825756 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.828269 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828621 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.828651 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828893 1466459 provision.go:138] copyHostCerts
	I0131 03:19:52.828964 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:52.828987 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:52.829069 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:52.829194 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:52.829209 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:52.829243 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:52.829323 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:52.829335 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:52.829366 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:52.829466 1466459 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.embed-certs-958254 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube embed-certs-958254]
	I0131 03:19:52.931760 1466459 provision.go:172] copyRemoteCerts
	I0131 03:19:52.931825 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:52.931856 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.935111 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935440 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.935465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935721 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.935915 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.936117 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.936273 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.024352 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:53.051185 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:19:53.076996 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:53.097919 1466459 provision.go:86] duration metric: configureAuth took 276.335726ms
	I0131 03:19:53.097951 1466459 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:53.098189 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:53.098319 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.101687 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102128 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.102178 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102334 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.102610 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.102877 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.103072 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.103309 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.103829 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.103860 1466459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:49.236547 1465727 retry.go:31] will retry after 1.793227218s: kubelet not initialised
	I0131 03:19:51.035248 1465727 retry.go:31] will retry after 2.779615352s: kubelet not initialised
	I0131 03:19:53.664145 1465496 start.go:369] acquired machines lock for "no-preload-625812" in 55.738696582s
	I0131 03:19:53.664205 1465496 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:53.664216 1465496 fix.go:54] fixHost starting: 
	I0131 03:19:53.664629 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:53.664680 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:53.683147 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0131 03:19:53.684034 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:53.684629 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:19:53.684660 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:53.685055 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:53.685266 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:19:53.685468 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:19:53.687260 1465496 fix.go:102] recreateIfNeeded on no-preload-625812: state=Stopped err=<nil>
	I0131 03:19:53.687288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	W0131 03:19:53.687444 1465496 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:53.689464 1465496 out.go:177] * Restarting existing kvm2 VM for "no-preload-625812" ...
	I0131 03:19:49.610240 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.610357 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.621551 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.110145 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.110248 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.121902 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.609752 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.609896 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.620729 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.620760 1465898 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:50.620769 1465898 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:50.620781 1465898 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:50.620842 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:50.655962 1465898 cri.go:89] found id: ""
	I0131 03:19:50.656034 1465898 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:50.670196 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:50.678438 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:50.678512 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686353 1465898 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686377 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:50.787983 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.766656 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.947670 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.020841 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.087869 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:52.087974 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:52.588285 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.088598 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.588683 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.088222 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
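	Rather than a full "kubeadm init", the reconfigure path above replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml and then waits for the apiserver process. A sketch of driving those phases in order (the runner and error handling are assumptions):

	package sketch

	import "fmt"

	// runKubeadmPhases replays the kubeadm init phases seen in the log against
	// the staged config, using the versioned binaries directory on the node.
	func runKubeadmPhases(run func(cmd string) error, version string) error {
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, phase := range phases {
			cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, version, phase)
			if err := run(cmd); err != nil {
				return fmt.Errorf("kubeadm phase %q: %w", phase, err)
			}
		}
		return nil
	}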
	I0131 03:19:53.416070 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:53.416102 1466459 machine.go:91] provisioned docker machine in 856.686657ms
	I0131 03:19:53.416115 1466459 start.go:300] post-start starting for "embed-certs-958254" (driver="kvm2")
	I0131 03:19:53.416130 1466459 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:53.416152 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.416515 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:53.416550 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.419110 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.419525 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419836 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.420057 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.420223 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.420376 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.503785 1466459 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:53.507858 1466459 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:53.507890 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:53.508021 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:53.508094 1466459 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:53.508184 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:53.515845 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:53.537459 1466459 start.go:303] post-start completed in 121.324433ms
	I0131 03:19:53.537495 1466459 fix.go:56] fixHost completed within 19.950074846s
	I0131 03:19:53.537526 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.540719 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541097 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.541138 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541371 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.541590 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541707 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541924 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.542116 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.542438 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.542452 1466459 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:53.663950 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671193.614107467
	
	I0131 03:19:53.663981 1466459 fix.go:206] guest clock: 1706671193.614107467
	I0131 03:19:53.663991 1466459 fix.go:219] Guest: 2024-01-31 03:19:53.614107467 +0000 UTC Remote: 2024-01-31 03:19:53.537501013 +0000 UTC m=+170.232508862 (delta=76.606454ms)
	I0131 03:19:53.664051 1466459 fix.go:190] guest clock delta is within tolerance: 76.606454ms
	I0131 03:19:53.664061 1466459 start.go:83] releasing machines lock for "embed-certs-958254", held for 20.076673524s
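	fix.go above compares the guest clock against the host and accepts the machine because the ~77ms delta is inside tolerance; a larger skew would trigger a resync. A minimal sketch of that comparison (the tolerance value itself is an assumption):

	package sketch

	import "time"

	// clockDeltaOK reports whether the guest/host clock skew is within the
	// allowed tolerance, as in the "guest clock delta is within tolerance" line.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}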
	I0131 03:19:53.664095 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.664469 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:53.667439 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668024 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.668104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668219 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.668884 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669087 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669227 1466459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:53.669314 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.669346 1466459 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:53.669377 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.673093 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673248 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673420 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673194 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673517 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673557 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673580 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673667 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673734 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.673969 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.673982 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.674173 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.674180 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.674312 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.799336 1466459 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:53.805162 1466459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:53.952587 1466459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:53.958419 1466459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:53.958530 1466459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:53.971832 1466459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:53.971866 1466459 start.go:475] detecting cgroup driver to use...
	I0131 03:19:53.971946 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:53.988375 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:54.000875 1466459 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:54.000948 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:54.017770 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:54.034214 1466459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:54.154352 1466459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:54.314926 1466459 docker.go:233] disabling docker service ...
	I0131 03:19:54.315012 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:54.330557 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:54.344595 1466459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:54.468196 1466459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:54.630438 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:54.645472 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:54.665340 1466459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:54.665427 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.677758 1466459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:54.677843 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.690405 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.702616 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.712654 1466459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:54.723746 1466459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:54.735284 1466459 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:54.735358 1466459 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:54.751082 1466459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:54.762460 1466459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:54.916842 1466459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:55.105770 1466459 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:55.105862 1466459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:55.111870 1466459 start.go:543] Will wait 60s for crictl version
	I0131 03:19:55.112014 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:19:55.116743 1466459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:55.165427 1466459 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:55.165526 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.223389 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.272307 1466459 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
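	The block above converts the node to CRI-O: cri-docker and docker units are stopped and masked, the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf, br_netfilter and ip_forward are enabled, and crio is restarted before the version check. A condensed sketch of that sequence through an assumed command runner that hands each string to a shell (optional steps and error text from the log are simplified):

	package sketch

	// configureCRIO issues, in order, the host commands the log uses to switch
	// the runtime to CRI-O with the cgroupfs driver. run stands in for the
	// remote ssh runner.
	func configureCRIO(run func(cmd string) error) error {
		steps := []string{
			`sudo systemctl mask cri-docker.service`,
			`sudo systemctl mask docker.service`,
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo modprobe br_netfilter`,
			`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, step := range steps {
			if err := run(step); err != nil {
				return err
			}
		}
		return nil
	}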
	I0131 03:19:53.690828 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Start
	I0131 03:19:53.691030 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring networks are active...
	I0131 03:19:53.691801 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network default is active
	I0131 03:19:53.692297 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network mk-no-preload-625812 is active
	I0131 03:19:53.693485 1465496 main.go:141] libmachine: (no-preload-625812) Getting domain xml...
	I0131 03:19:53.694618 1465496 main.go:141] libmachine: (no-preload-625812) Creating domain...
	I0131 03:19:55.042532 1465496 main.go:141] libmachine: (no-preload-625812) Waiting to get IP...
	I0131 03:19:55.043607 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.044041 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.044180 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.044045 1467687 retry.go:31] will retry after 230.922351ms: waiting for machine to come up
	I0131 03:19:55.276816 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.277402 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.277435 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.277367 1467687 retry.go:31] will retry after 370.068692ms: waiting for machine to come up
	I0131 03:19:55.274017 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:55.277592 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278017 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:55.278056 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278356 1466459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:55.283769 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:55.298107 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:55.298188 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:55.338433 1466459 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:55.338558 1466459 ssh_runner.go:195] Run: which lz4
	I0131 03:19:55.342771 1466459 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:55.347160 1466459 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:55.347206 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:56.991725 1466459 crio.go:444] Took 1.648994 seconds to copy over tarball
	I0131 03:19:56.991821 1466459 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
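	With no preloaded images in CRI-O's store, the log copies the ~458 MB preload tarball to the node and unpacks it into /var with lz4, preserving security xattrs so file capabilities survive. A sketch of that existence-check/copy/extract flow (run and copy are assumed helpers; paths match the log):

	package sketch

	// installPreload copies the preload tarball to the node if it is missing
	// and then extracts it into /var, matching the stat, scp, and tar steps
	// in the log.
	func installPreload(run func(cmd string) error, copy func(local, remote string) error, localTarball string) error {
		// Only copy when the tarball is not already present on the node.
		if err := run(`stat /preloaded.tar.lz4`); err != nil {
			if err := copy(localTarball, "/preloaded.tar.lz4"); err != nil {
				return err
			}
		}
		return run(`sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`)
	}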
	I0131 03:19:53.823139 1465727 retry.go:31] will retry after 3.780431021s: kubelet not initialised
	I0131 03:19:57.615679 1465727 retry.go:31] will retry after 12.134340719s: kubelet not initialised
	I0131 03:19:54.588794 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.623052 1465898 api_server.go:72] duration metric: took 2.535180605s to wait for apiserver process to appear ...
	I0131 03:19:54.623092 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:54.623141 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:55.649133 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.649797 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.649838 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.649768 1467687 retry.go:31] will retry after 421.622241ms: waiting for machine to come up
	I0131 03:19:56.073712 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.074467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.074513 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.074269 1467687 retry.go:31] will retry after 587.05453ms: waiting for machine to come up
	I0131 03:19:56.663210 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.663749 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.663790 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.663678 1467687 retry.go:31] will retry after 620.56275ms: waiting for machine to come up
	I0131 03:19:57.286207 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.286688 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.286737 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.286647 1467687 retry.go:31] will retry after 674.764903ms: waiting for machine to come up
	I0131 03:19:57.963069 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.963573 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.963599 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.963520 1467687 retry.go:31] will retry after 1.10400582s: waiting for machine to come up
	I0131 03:19:59.068964 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:59.069440 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:59.069467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:59.069383 1467687 retry.go:31] will retry after 1.48867494s: waiting for machine to come up
	I0131 03:20:00.084963 1466459 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093104085s)
	I0131 03:20:00.085000 1466459 crio.go:451] Took 3.093238 seconds to extract the tarball
	I0131 03:20:00.085014 1466459 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:20:00.153533 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:00.203133 1466459 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:20:00.203215 1466459 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:20:00.203308 1466459 ssh_runner.go:195] Run: crio config
	I0131 03:20:00.266864 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:00.266898 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:00.266927 1466459 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:00.266955 1466459 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-958254 NodeName:embed-certs-958254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:00.267148 1466459 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-958254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:00.267253 1466459 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-958254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:00.267331 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:20:00.279543 1466459 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:00.279637 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:00.292463 1466459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0131 03:20:00.313102 1466459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:20:00.329962 1466459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0131 03:20:00.351487 1466459 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:00.355881 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:00.368624 1466459 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254 for IP: 192.168.39.232
	I0131 03:20:00.368668 1466459 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:00.368836 1466459 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:00.368890 1466459 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:00.368997 1466459 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/client.key
	I0131 03:20:00.369071 1466459 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key.ca7bc7e0
	I0131 03:20:00.369108 1466459 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key
	I0131 03:20:00.369230 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:00.369257 1466459 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:00.369268 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:00.369294 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:00.369317 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:00.369341 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:00.369379 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:00.370093 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:00.392771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 03:20:00.416504 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:00.441357 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 03:20:00.469603 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:00.493533 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:00.521871 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:00.547738 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:00.572771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:00.596263 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:00.618766 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:00.642074 1466459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:00.657634 1466459 ssh_runner.go:195] Run: openssl version
	I0131 03:20:00.662869 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:00.673704 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678201 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678299 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.683872 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:00.694619 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:00.705736 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710374 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710451 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.715928 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:00.727620 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:00.738237 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742428 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742525 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.747812 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
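
The ln -fs commands above follow OpenSSL's subject-hash lookup convention: each CA certificate is linked under /etc/ssl/certs as <subject-hash>.0 (for example b5213941.0 for minikubeCA.pem), which is how TLS clients on the node locate it. A minimal shell sketch of the same idea, using the minikubeCA.pem path from the log but computing the hash instead of hard-coding it:

	# print the subject hash OpenSSL uses when looking up a CA in the trust store
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the CA under that hash so other tools on the node can verify against it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
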
	I0131 03:20:00.757953 1466459 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:00.762418 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:00.768325 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:00.773824 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:00.779967 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:00.785943 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:00.791907 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:20:00.797790 1466459 kubeadm.go:404] StartCluster: {Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:00.797882 1466459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:00.797989 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:00.843199 1466459 cri.go:89] found id: ""
	I0131 03:20:00.843289 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:00.853963 1466459 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:00.853994 1466459 kubeadm.go:636] restartCluster start
	I0131 03:20:00.854060 1466459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:00.864776 1466459 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:00.866019 1466459 kubeconfig.go:92] found "embed-certs-958254" server: "https://192.168.39.232:8443"
	I0131 03:20:00.868584 1466459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:00.878689 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:00.878765 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:00.891577 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.378755 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.378849 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.392040 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.879661 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.879770 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.894998 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.379551 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.379671 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.393008 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.879560 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.879680 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.896699 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:59.557240 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.557285 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.557308 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.612724 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.612775 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.624061 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.721181 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:19:59.721236 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.123708 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.134187 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.134229 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.624066 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.630341 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.630374 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.123728 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.131385 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.131479 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.623667 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.629384 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.629431 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.123701 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.129220 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.129272 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.623693 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.629228 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.629271 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.123778 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.132555 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:03.132617 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.623244 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.630694 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:20:03.649732 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:03.649778 1465898 api_server.go:131] duration metric: took 9.02667615s to wait for apiserver health ...
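
For reference on the polling above: the early 403 responses are what an anonymous request sees before the rbac/bootstrap-roles post-start hook has installed the binding that exposes /healthz to unauthenticated callers; after that, the endpoint returns 500 with a per-check breakdown until every post-start hook passes, and finally 200. The same endpoint can be probed by hand; appending ?verbose returns the per-check list seen in the 500 bodies (a debugging sketch only; -k skips TLS verification):

	# query the apiserver health endpoint the test polls, with per-check output
	curl -k "https://192.168.61.123:8444/healthz?verbose"
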
	I0131 03:20:03.649792 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:20:03.649802 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:03.651944 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:03.653645 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:03.683325 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:03.719778 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:03.745975 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:03.746029 1465898 system_pods.go:61] "coredns-5dd5756b68-xlq7n" [0b9d620d-d79f-474e-aeb7-1357daaaa849] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:03.746044 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [2f2f474f-bee9-4df2-a5f6-2525bc15c99a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:03.746056 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [ba87e90b-b01b-4aa7-a4da-68d8e5c39020] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:03.746088 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [a96ebed4-d6f6-47b7-a8f6-b80acc9cde60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:03.746111 1465898 system_pods.go:61] "kube-proxy-trv94" [c085dfdb-0b75-40c1-b331-ef687888090e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:03.746121 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [b7adce77-8007-4316-9a2a-bdcec260840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:03.746141 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-fct8b" [b1d9d7e3-98c4-4b7a-acd1-d88fe109ef33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:03.746155 1465898 system_pods.go:61] "storage-provisioner" [be762288-ff88-44e7-938d-9ecc8a977526] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:03.746169 1465898 system_pods.go:74] duration metric: took 26.36215ms to wait for pod list to return data ...
	I0131 03:20:03.746183 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:03.755320 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:03.755365 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:03.755384 1465898 node_conditions.go:105] duration metric: took 9.194114ms to run NodePressure ...
	I0131 03:20:03.755413 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:04.124222 1465898 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130888 1465898 kubeadm.go:787] kubelet initialised
	I0131 03:20:04.130921 1465898 kubeadm.go:788] duration metric: took 6.663771ms waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130932 1465898 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:04.141883 1465898 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
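
The pod_ready waits that follow poll each system-critical pod and watch for its PodReady condition to turn True. A rough client-go equivalent is sketched below (the label selectors are the ones listed in the log; the kubeconfig path, timeouts and helper name are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether a pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same label selectors the log waits on for system-critical pods.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}

	deadline := time.Now().Add(4 * time.Minute)
	for _, sel := range selectors {
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && isReady(&pods.Items[0]) {
				fmt.Printf("%s is Ready\n", pods.Items[0].Name)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
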
	I0131 03:20:00.559917 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:00.715628 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:00.715677 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:00.560506 1467687 retry.go:31] will retry after 1.67725835s: waiting for machine to come up
	I0131 03:20:02.240289 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:02.240826 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:02.240863 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:02.240781 1467687 retry.go:31] will retry after 2.023057937s: waiting for machine to come up
	I0131 03:20:04.266202 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:04.266733 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:04.266825 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:04.266715 1467687 retry.go:31] will retry after 2.664323304s: waiting for machine to come up
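
The libmachine lines above are a retry loop: each pass asks libvirt for the domain's current DHCP lease, and while no IP has been handed out yet it sleeps for a randomized, growing interval ("will retry after 1.67s", "2.02s", ...). A generic sketch of that pattern follows (the jitter, cap and helper name are illustrative; the actual backoff comes from the retry package referenced as retry.go in the log):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a jittered, growing delay until it
// succeeds or the deadline passes, which is the shape of the
// "waiting for machine to come up" loop in the log.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for time.Now().Before(deadline) {
		if err := fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 8*time.Second {
			delay *= 2
		}
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	_ = retryUntil(2*time.Minute, func() error {
		return errors.New("unable to find current IP address") // stand-in for the DHCP lease lookup
	})
}
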
	I0131 03:20:03.379260 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.379366 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.395063 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:03.879206 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.879327 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.896172 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.378721 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.378829 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.395328 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.878823 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.878944 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.891061 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.379692 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.379795 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.395247 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.879667 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.879811 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.894445 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.378974 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.379107 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.391878 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.879343 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.879446 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.892910 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.379549 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.379647 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.391991 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.879610 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.879757 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.895280 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
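
Each "Checking apiserver status" entry above runs sudo pgrep -xnf kube-apiserver.*minikube.* on the node and treats a non-zero exit as "no apiserver process yet", which is why the loop keeps retrying roughly every half second. A local sketch of that check (the pattern is taken from the log; running it over SSH as the tooling does is omitted):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the log's check: pgrep exits 0 only when a process
// whose full command line matches the pattern exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for i := 0; i < 20 && !apiserverRunning(); i++ {
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver process up:", apiserverRunning())
}
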
	I0131 03:20:06.154196 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:08.664906 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:06.932836 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:06.933529 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:06.933574 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:06.933459 1467687 retry.go:31] will retry after 3.065677387s: waiting for machine to come up
	I0131 03:20:10.001330 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:10.002186 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:10.002216 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:10.002101 1467687 retry.go:31] will retry after 3.036905728s: waiting for machine to come up
	I0131 03:20:08.379200 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.379310 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.392983 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:08.878955 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.879070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.890999 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.379530 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.379633 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.391351 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.878733 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.878814 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.891556 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.379098 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.379206 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.391233 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.879672 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.879786 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.892324 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.892364 1466459 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:10.892377 1466459 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:10.892393 1466459 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:10.892471 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:10.932354 1466459 cri.go:89] found id: ""
	I0131 03:20:10.932425 1466459 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:10.948273 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:10.957212 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:10.957285 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966329 1466459 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966369 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.093326 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.750399 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.960956 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.060752 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.148963 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:12.149070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:12.649736 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:13.150030 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
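
The reconfigure path above is driven by individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init, after which the log goes back to waiting for the apiserver process to reappear. A condensed sketch of that phase sequence (commands lifted from the log; error handling reduced to a bail-out):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// The same phases the log runs, in order, against the generated kubeadm.yaml.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.28.4:$PATH\" kubeadm init phase " + p +
			" --config /var/tmp/minikube/kubeadm.yaml"
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			log.Fatalf("phase %q failed: %v\n%s", p, err, out)
		}
	}
}
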
	I0131 03:20:09.755152 1465727 retry.go:31] will retry after 13.770889272s: kubelet not initialised
	I0131 03:20:09.648674 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:09.648703 1465898 pod_ready.go:81] duration metric: took 5.506781604s waiting for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:09.648716 1465898 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656233 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:11.656258 1465898 pod_ready.go:81] duration metric: took 2.007535905s waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656267 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663570 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.663600 1465898 pod_ready.go:81] duration metric: took 1.007324961s waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668808 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.668832 1465898 pod_ready.go:81] duration metric: took 5.21407ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668843 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673583 1465898 pod_ready.go:92] pod "kube-proxy-trv94" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.673603 1465898 pod_ready.go:81] duration metric: took 4.754586ms waiting for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679052 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.679074 1465898 pod_ready.go:81] duration metric: took 5.453847ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679082 1465898 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:13.040911 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.041419 1465496 main.go:141] libmachine: (no-preload-625812) Found IP for machine: 192.168.72.23
	I0131 03:20:13.041451 1465496 main.go:141] libmachine: (no-preload-625812) Reserving static IP address...
	I0131 03:20:13.041471 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has current primary IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.042029 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.042083 1465496 main.go:141] libmachine: (no-preload-625812) Reserved static IP address: 192.168.72.23
	I0131 03:20:13.042105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | skip adding static IP to network mk-no-preload-625812 - found existing host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"}
	I0131 03:20:13.042124 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Getting to WaitForSSH function...
	I0131 03:20:13.042143 1465496 main.go:141] libmachine: (no-preload-625812) Waiting for SSH to be available...
	I0131 03:20:13.044263 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044670 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.044707 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044866 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH client type: external
	I0131 03:20:13.044890 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa (-rw-------)
	I0131 03:20:13.044958 1465496 main.go:141] libmachine: (no-preload-625812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:20:13.044979 1465496 main.go:141] libmachine: (no-preload-625812) DBG | About to run SSH command:
	I0131 03:20:13.044993 1465496 main.go:141] libmachine: (no-preload-625812) DBG | exit 0
	I0131 03:20:13.142763 1465496 main.go:141] libmachine: (no-preload-625812) DBG | SSH cmd err, output: <nil>: 
	I0131 03:20:13.143065 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetConfigRaw
	I0131 03:20:13.143763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.146827 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147322 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.147356 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147639 1465496 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/config.json ...
	I0131 03:20:13.147843 1465496 machine.go:88] provisioning docker machine ...
	I0131 03:20:13.147866 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:13.148104 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148307 1465496 buildroot.go:166] provisioning hostname "no-preload-625812"
	I0131 03:20:13.148332 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148510 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.151259 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151623 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.151658 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151808 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.152034 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152222 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152415 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.152601 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.152979 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.152996 1465496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-625812 && echo "no-preload-625812" | sudo tee /etc/hostname
	I0131 03:20:13.302957 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-625812
	
	I0131 03:20:13.302989 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.306162 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306612 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.306656 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306932 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.307236 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307458 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307644 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.307891 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.308385 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.308415 1465496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-625812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-625812/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-625812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:20:13.459393 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:20:13.459432 1465496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:20:13.459458 1465496 buildroot.go:174] setting up certificates
	I0131 03:20:13.459476 1465496 provision.go:83] configureAuth start
	I0131 03:20:13.459490 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.459818 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.462867 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463301 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.463333 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463516 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.466156 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466597 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.466629 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466788 1465496 provision.go:138] copyHostCerts
	I0131 03:20:13.466856 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:20:13.466870 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:20:13.466945 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:20:13.467051 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:20:13.467061 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:20:13.467099 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:20:13.467182 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:20:13.467195 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:20:13.467226 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:20:13.467295 1465496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.no-preload-625812 san=[192.168.72.23 192.168.72.23 localhost 127.0.0.1 minikube no-preload-625812]
	I0131 03:20:13.629331 1465496 provision.go:172] copyRemoteCerts
	I0131 03:20:13.629392 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:20:13.629420 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.632451 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.632871 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.632903 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.633155 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.633334 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.633502 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.633643 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:13.729991 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:20:13.755853 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:20:13.781125 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:20:13.803778 1465496 provision.go:86] duration metric: configureAuth took 344.286867ms
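
The configureAuth step above generates a server certificate whose SANs cover the VM IP, localhost and the machine name, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A stripped-down sketch of issuing such a SAN certificate with crypto/x509 (key size, serial and validity are illustrative, and it self-signs for brevity where the real flow signs with the existing CA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-625812"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provisioning log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-625812"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.23"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
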
	I0131 03:20:13.803820 1465496 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:20:13.804030 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:20:13.804138 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.807234 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807675 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.807736 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807899 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.808108 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808307 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808461 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.808663 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.809033 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.809055 1465496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:20:14.179008 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:20:14.179039 1465496 machine.go:91] provisioned docker machine in 1.031179568s
	I0131 03:20:14.179055 1465496 start.go:300] post-start starting for "no-preload-625812" (driver="kvm2")
	I0131 03:20:14.179072 1465496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:20:14.179134 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.179500 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:20:14.179542 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.183050 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183483 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.183515 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183726 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.183919 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.184103 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.184299 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.282828 1465496 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:20:14.288098 1465496 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:20:14.288135 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:20:14.288242 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:20:14.288351 1465496 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:20:14.288482 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:20:14.297359 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:14.323339 1465496 start.go:303] post-start completed in 144.265535ms
	I0131 03:20:14.323379 1465496 fix.go:56] fixHost completed within 20.659162262s
	I0131 03:20:14.323408 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.326649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.327063 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327386 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.327693 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.327882 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.328068 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.328260 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:14.328638 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:14.328668 1465496 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:20:14.464275 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671214.411008277
	
	I0131 03:20:14.464299 1465496 fix.go:206] guest clock: 1706671214.411008277
	I0131 03:20:14.464307 1465496 fix.go:219] Guest: 2024-01-31 03:20:14.411008277 +0000 UTC Remote: 2024-01-31 03:20:14.32338512 +0000 UTC m=+358.954052365 (delta=87.623157ms)
	I0131 03:20:14.464327 1465496 fix.go:190] guest clock delta is within tolerance: 87.623157ms
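
The guest-clock check above runs a date command inside the VM, parses the result as fractional seconds, and compares it with the host clock; provisioning only continues when the delta stays inside a small tolerance (87ms here). A local sketch of that comparison (the tolerance value is illustrative, and the command runs locally rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// In the real flow this runs over SSH in the guest; here it runs locally.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	guestSec, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(guestSec*float64(time.Second)))

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
}
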
	I0131 03:20:14.464332 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 20.800154018s
	I0131 03:20:14.464349 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.464664 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:14.467627 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.467912 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.467952 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.468086 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468622 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468827 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468918 1465496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:20:14.468974 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.469103 1465496 ssh_runner.go:195] Run: cat /version.json
	I0131 03:20:14.469143 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.471884 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472243 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472408 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472472 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472507 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472426 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472696 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472810 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472825 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473046 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473048 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473275 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.473288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473547 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.563583 1465496 ssh_runner.go:195] Run: systemctl --version
	I0131 03:20:14.602977 1465496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:20:14.752069 1465496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:20:14.759056 1465496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:20:14.759149 1465496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:20:14.778064 1465496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:20:14.778102 1465496 start.go:475] detecting cgroup driver to use...
	I0131 03:20:14.778197 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:20:14.791672 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:20:14.803938 1465496 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:20:14.804018 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:20:14.816689 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:20:14.829415 1465496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:20:14.956428 1465496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:20:15.082172 1465496 docker.go:233] disabling docker service ...
	I0131 03:20:15.082260 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:20:15.094675 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:20:15.106262 1465496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:20:15.229460 1465496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:20:15.341585 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:20:15.354587 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:20:15.374141 1465496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:20:15.374228 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.386153 1465496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:20:15.386224 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.398130 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.407759 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.417278 1465496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:20:15.427128 1465496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:20:15.437249 1465496 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:20:15.437318 1465496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:20:15.451522 1465496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:20:15.460741 1465496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:20:15.564813 1465496 ssh_runner.go:195] Run: sudo systemctl restart crio
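
The runtime preparation above rewrites CRI-O's drop-in config with sed (pause image, cgroupfs cgroup manager, conmon_cgroup), after disabling the competing container runtimes, and then restarts the daemon. A compressed sketch of those same edits (paths and sed expressions taken from the log):

package main

import (
	"log"
	"os/exec"
)

func run(cmd string) {
	if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
		log.Fatalf("%s: %v\n%s", cmd, err, out)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	run(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf)
	run(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf)
	run(`sudo sed -i '/conmon_cgroup = .*/d' ` + conf)
	run(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf)
	run("sudo systemctl daemon-reload && sudo systemctl restart crio")
}
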
	I0131 03:20:15.729334 1465496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:20:15.729436 1465496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:20:15.734544 1465496 start.go:543] Will wait 60s for crictl version
	I0131 03:20:15.734634 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:15.738536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:20:15.789942 1465496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:20:15.790066 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.844864 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.895286 1465496 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0131 03:20:13.649824 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.150192 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.649250 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.677858 1466459 api_server.go:72] duration metric: took 2.528895825s to wait for apiserver process to appear ...
	I0131 03:20:14.677890 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:14.677920 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:14.688429 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:17.190684 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:15.896701 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:15.899655 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900079 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:15.900105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900392 1465496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0131 03:20:15.904607 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:15.916202 1465496 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 03:20:15.916255 1465496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:15.964126 1465496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0131 03:20:15.964157 1465496 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:20:15.964213 1465496 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.964249 1465496 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.964291 1465496 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.964278 1465496 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.964411 1465496 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0131 03:20:15.964472 1465496 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.964696 1465496 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.964771 1465496 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:15.965842 1465496 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.966659 1465496 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0131 03:20:15.966705 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.966737 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.967221 1465496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.967386 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.157890 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.160428 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0131 03:20:16.170727 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.185791 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.209517 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.212835 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.215809 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.221405 1465496 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0131 03:20:16.221457 1465496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.221504 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369265 1465496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0131 03:20:16.369302 1465496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0131 03:20:16.369324 1465496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.369340 1465496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.369344 1465496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0131 03:20:16.369367 1465496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.369382 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369392 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369404 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369474 1465496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0131 03:20:16.369494 1465496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.369506 1465496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0131 03:20:16.369521 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369529 1465496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.369562 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369617 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.384313 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.384333 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.470950 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0131 03:20:16.471044 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.471091 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.496271 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0131 03:20:16.496296 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496398 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496485 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:16.496488 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496338 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.496494 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496730 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.531464 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531550 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0131 03:20:16.531570 1465496 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531594 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531640 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531595 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0131 03:20:16.531669 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531638 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531738 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0131 03:20:16.536091 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0131 03:20:16.805880 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339660 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.807978952s)
	I0131 03:20:20.339703 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0131 03:20:20.339719 1465496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.533795146s)
	I0131 03:20:20.339744 1465496 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339785 1465496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0131 03:20:20.339823 1465496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339829 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339863 1465496 ssh_runner.go:195] Run: which crictl
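
For reference, the remove-and-reload cycle logged above can be repeated by hand on the node. A minimal sketch using registry.k8s.io/etcd:3.5.10-0 as the example image (the same crictl, stat, and podman invocations the driver runs over SSH):

# drop the stale image from the shared containers/storage store that CRI-O reads
sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
# check whether the cached tarball already exists on the node before copying it again
stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
# load the tarball back so the image is visible to the runtime under its original tag
sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
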
	I0131 03:20:19.144422 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.144461 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.144481 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.199050 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.199092 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.199110 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.248370 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.248405 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:19.678887 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.699942 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.699975 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.178212 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.196360 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:20.196408 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.679003 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.685599 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:20:20.693909 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:20.693939 1466459 api_server.go:131] duration metric: took 6.016042033s to wait for apiserver health ...
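
The healthz probes above can be approximated with curl: anonymous requests return 403 until the RBAC bootstrap roles exist, 500 responses list the post-start hooks that are still pending, and 200 means the apiserver is ready. A minimal polling sketch against the address shown in the log:

# poll the health endpoint until it returns HTTP 200 (certificate verification skipped with -k)
until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.232:8443/healthz)" = "200" ]; do
  sleep 1
done
curl -sk https://192.168.39.232:8443/healthz   # prints "ok" once healthy
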
	I0131 03:20:20.693972 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:20.693978 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:20.695935 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:20.697296 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:20.708301 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
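
The 457-byte conflist itself is not reproduced in the log. A generic bridge CNI configuration of the same shape, using the pod CIDR from this run, would look roughly like the following; this is only a sketch with standard CNI fields, not necessarily byte-for-byte what minikube writes:

# write a minimal bridge + portmap conflist (assumed layout, for illustration only)
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
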
	I0131 03:20:20.730496 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:20.741756 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:20.741799 1466459 system_pods.go:61] "coredns-5dd5756b68-ntmxp" [bb90dd61-c60a-4beb-b77c-66c4b5ce56a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:20.741810 1466459 system_pods.go:61] "etcd-embed-certs-958254" [69a5883a-307d-47d1-86ef-6f76bf77bdff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:20.741830 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [1cad3813-0df9-4729-862f-d1ab237d297c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:20.741841 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [34bfed89-5c8c-4294-843b-d32261c8fb5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:20.741851 1466459 system_pods.go:61] "kube-proxy-q6dmr" [092e0786-80f7-480c-8ede-95e11c1f17a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:20.741862 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [28c8d75e-9517-4ccc-85ef-5b535973c829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:20.741876 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-d8x5f" [fc69fea8-ab7b-4f3d-980f-7ad995027e77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:20.741889 1466459 system_pods.go:61] "storage-provisioner" [5026a00d-8df8-408a-a164-cf22697260e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:20.741898 1466459 system_pods.go:74] duration metric: took 11.375298ms to wait for pod list to return data ...
	I0131 03:20:20.741912 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:20.748073 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:20.748110 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:20.748125 1466459 node_conditions.go:105] duration metric: took 6.206594ms to run NodePressure ...
	I0131 03:20:20.748147 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:21.022867 1466459 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028572 1466459 kubeadm.go:787] kubelet initialised
	I0131 03:20:21.028600 1466459 kubeadm.go:788] duration metric: took 5.696903ms waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028612 1466459 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:21.034373 1466459 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.040977 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041008 1466459 pod_ready.go:81] duration metric: took 6.605955ms waiting for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.041021 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041029 1466459 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.047304 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047360 1466459 pod_ready.go:81] duration metric: took 6.317423ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.047379 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047397 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.054356 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054380 1466459 pod_ready.go:81] duration metric: took 6.969808ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.054393 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054405 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.066327 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:19.688890 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.187659 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.403415 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.063558989s)
	I0131 03:20:22.403448 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0131 03:20:22.403467 1465496 ssh_runner.go:235] Completed: which crictl: (2.063583602s)
	I0131 03:20:22.403536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:22.403473 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.403667 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.453126 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0131 03:20:22.453255 1465496 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:25.325221 1465496 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.871938157s)
	I0131 03:20:25.325266 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0131 03:20:25.325371 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.92167713s)
	I0131 03:20:25.325397 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0131 03:20:25.325430 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.325498 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.562106 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.562702 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.562730 1466459 pod_ready.go:81] duration metric: took 5.508313651s waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.562740 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570741 1466459 pod_ready.go:92] pod "kube-proxy-q6dmr" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.570776 1466459 pod_ready.go:81] duration metric: took 8.02796ms waiting for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570788 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.532998 1465727 kubeadm.go:787] kubelet initialised
	I0131 03:20:23.533031 1465727 kubeadm.go:788] duration metric: took 39.585413252s waiting for restarted kubelet to initialise ...
	I0131 03:20:23.533041 1465727 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:23.538956 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545637 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.545665 1465727 pod_ready.go:81] duration metric: took 6.67341ms waiting for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545679 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552018 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.552047 1465727 pod_ready.go:81] duration metric: took 6.359089ms waiting for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552061 1465727 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557416 1465727 pod_ready.go:92] pod "etcd-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.557446 1465727 pod_ready.go:81] duration metric: took 5.375834ms waiting for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557458 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563429 1465727 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.563458 1465727 pod_ready.go:81] duration metric: took 5.99092ms waiting for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563470 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931088 1465727 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.931123 1465727 pod_ready.go:81] duration metric: took 367.644608ms waiting for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931135 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330635 1465727 pod_ready.go:92] pod "kube-proxy-7dtkz" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.330663 1465727 pod_ready.go:81] duration metric: took 399.520658ms waiting for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330673 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731521 1465727 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.731554 1465727 pod_ready.go:81] duration metric: took 400.873461ms waiting for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731568 1465727 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.738444 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:24.686688 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.688623 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:29.186579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
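
These repeated pod_ready probes are the programmatic equivalent of a kubectl wait. Run against the same cluster it would look roughly like this (the kubeconfig context for this profile is not shown in the excerpt, so it is omitted):

# block until the metrics-server pod reports Ready, or give up after the same 4-minute budget
kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-57f55c9bc5-fct8b --timeout=4m
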
	I0131 03:20:28.180697 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.855170809s)
	I0131 03:20:28.180729 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0131 03:20:28.180767 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:28.180841 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:29.652395 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.471522862s)
	I0131 03:20:29.652425 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0131 03:20:29.652463 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:29.652540 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:28.578108 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.077401 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.080970 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.739586 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:30.739736 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.238815 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.187176 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.188862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.502715 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.85014178s)
	I0131 03:20:31.502759 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0131 03:20:31.502787 1465496 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:31.502844 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:32.554143 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.051250967s)
	I0131 03:20:32.554188 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0131 03:20:32.554229 1465496 cache_images.go:123] Successfully loaded all cached images
	I0131 03:20:32.554282 1465496 cache_images.go:92] LoadImages completed in 16.590108265s
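
Once LoadImages completes, the transferred images should be visible to the runtime. A quick check on the node, assuming crictl is pointed at the CRI-O socket:

# confirm the freshly loaded images are known to CRI-O
sudo crictl images | grep -E 'etcd|coredns|kube-(apiserver|controller-manager|scheduler|proxy)|storage-provisioner'
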
	I0131 03:20:32.554386 1465496 ssh_runner.go:195] Run: crio config
	I0131 03:20:32.619584 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:32.619612 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:32.619637 1465496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:32.619665 1465496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-625812 NodeName:no-preload-625812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:32.619840 1465496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-625812"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:32.619939 1465496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-625812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
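
A hedged way to sanity-check a generated config like the one above, assuming the staged kubeadm binary and the rendered file paths from this log, is a dry run; on a node that already hosts a cluster, preflight warnings are expected and nothing is applied:

# parse the config and print the objects kubeadm would create, without changing node state
sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
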
	I0131 03:20:32.620017 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0131 03:20:32.628855 1465496 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:32.628963 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:32.636481 1465496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0131 03:20:32.654320 1465496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0131 03:20:32.670366 1465496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0131 03:20:32.688615 1465496 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:32.692444 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:32.705599 1465496 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812 for IP: 192.168.72.23
	I0131 03:20:32.705644 1465496 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:32.705822 1465496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:32.705894 1465496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:32.705997 1465496 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/client.key
	I0131 03:20:32.706058 1465496 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key.a30a8404
	I0131 03:20:32.706092 1465496 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key
	I0131 03:20:32.706194 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:32.706221 1465496 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:32.706231 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:32.706258 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:32.706284 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:32.706310 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:32.706349 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:32.707138 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:32.729972 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:20:32.753498 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:32.775599 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:20:32.799455 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:32.822732 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:32.845839 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:32.868933 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:32.891565 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:32.914752 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:32.937305 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:32.960253 1465496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:32.976285 1465496 ssh_runner.go:195] Run: openssl version
	I0131 03:20:32.981630 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:32.990533 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994914 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994986 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:33.000249 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:33.009516 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:33.018643 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023046 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023106 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.028238 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:33.036925 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:33.045708 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050442 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050536 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.056067 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:33.065200 1465496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:33.069489 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:33.075140 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:33.080981 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:33.087018 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:33.092665 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:33.099605 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
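
Each openssl call above asks whether a certificate will still be valid in 24 hours: -checkend 86400 exits 0 only if the certificate does not expire within the next 86400 seconds. The same sweep, compacted:

# a non-zero exit status for any cert means it expires within 24 hours
for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client etcd/server etcd/healthcheck-client etcd/peer; do
  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
    && echo "${c}: ok" || echo "${c}: expires within 24h"
done
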
	I0131 03:20:33.106207 1465496 kubeadm.go:404] StartCluster: {Name:no-preload-625812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:33.106310 1465496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:33.106376 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:33.150992 1465496 cri.go:89] found id: ""
	I0131 03:20:33.151088 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:33.161105 1465496 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:33.161131 1465496 kubeadm.go:636] restartCluster start
	I0131 03:20:33.161219 1465496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:33.170638 1465496 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.172109 1465496 kubeconfig.go:92] found "no-preload-625812" server: "https://192.168.72.23:8443"
	I0131 03:20:33.175582 1465496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:33.185433 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.185523 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.196952 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.685512 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.685612 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.696682 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.186433 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.197957 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.685533 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.685640 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.696731 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:35.186267 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.186369 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.197982 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.578014 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:33.578038 1466459 pod_ready.go:81] duration metric: took 7.007241801s waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:33.578047 1466459 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:35.585039 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.585299 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.737680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.740698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686379 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:38.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686193 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.686284 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.697343 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.185858 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.185960 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.197161 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.685546 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.685646 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.696796 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.186186 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.186280 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.197357 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.685916 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.686012 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.700288 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.185723 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.185820 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.197397 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.685651 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.685757 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.697204 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.185744 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.185844 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.198598 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.686185 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.686267 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.697736 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.186432 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.198099 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.085028 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.585359 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.238117 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.239129 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.687687 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:43.186737 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.686132 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.686236 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.699172 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.185642 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.185744 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.198284 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.685827 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.685935 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.698501 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.185953 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.186088 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.196802 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.686371 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.686445 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.698536 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.186445 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:43.186560 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:43.198640 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.198679 1465496 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:43.198690 1465496 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:43.198704 1465496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:43.198765 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:43.235648 1465496 cri.go:89] found id: ""
	I0131 03:20:43.235740 1465496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:43.252848 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:43.263501 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:43.263590 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274044 1465496 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274075 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:43.402961 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.454642 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051640672s)
	I0131 03:20:44.454673 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.660185 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.744795 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.816577 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:44.816690 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:45.316895 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:44.591170 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.085954 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:44.739730 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.240982 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.686082 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.687451 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.816800 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.317657 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.816892 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.317696 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.342389 1465496 api_server.go:72] duration metric: took 2.525810484s to wait for apiserver process to appear ...
	I0131 03:20:47.342423 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:47.342448 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.385155 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.385192 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.385206 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.431253 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.431293 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.842624 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.847644 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:51.847685 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.343335 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.348723 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:52.348780 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.842935 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.848263 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:20:52.863072 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:20:52.863104 1465496 api_server.go:131] duration metric: took 5.520672047s to wait for apiserver health ...
	I0131 03:20:52.863113 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:52.863120 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:52.865141 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:49.585837 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.087030 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:49.738408 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:51.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:50.186754 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.197217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.866822 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:52.881451 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:52.918954 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:52.930533 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:52.930566 1465496 system_pods.go:61] "coredns-76f75df574-4qhpt" [9a5c2a49-f787-456a-9d15-cea2e111c6fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:52.930575 1465496 system_pods.go:61] "etcd-no-preload-625812" [2dbdb2c3-dd04-40de-80b4-caf18f1df2e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:52.930587 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [fd209808-5ebc-464e-b14b-88c6c830d7bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:52.930593 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [1f2cb9ec-cec9-4c45-8b78-0c9a9c0c9821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:52.930600 1465496 system_pods.go:61] "kube-proxy-8fdx9" [d1311d92-482b-4aa2-9dd3-053597717aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:52.930607 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [f7b0ba21-6c1d-4c67-aa69-6086b28ddf78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:52.930614 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-sjndx" [6bcdb3bb-4e28-4127-a273-091b44059d10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:52.930620 1465496 system_pods.go:61] "storage-provisioner" [66a4003b-e35e-4216-8d27-e8897a6ddc71] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:52.930627 1465496 system_pods.go:74] duration metric: took 11.645516ms to wait for pod list to return data ...
	I0131 03:20:52.930635 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:52.943250 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:52.943291 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:52.943306 1465496 node_conditions.go:105] duration metric: took 12.665118ms to run NodePressure ...
	I0131 03:20:52.943328 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:53.231968 1465496 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239131 1465496 kubeadm.go:787] kubelet initialised
	I0131 03:20:53.239162 1465496 kubeadm.go:788] duration metric: took 7.159608ms waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239171 1465496 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:53.248561 1465496 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:55.256463 1465496 pod_ready.go:102] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.585633 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.086475 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.239922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.738132 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.686904 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.687249 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.187579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.261900 1465496 pod_ready.go:92] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:57.261928 1465496 pod_ready.go:81] duration metric: took 4.013340748s waiting for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:57.261940 1465496 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:59.268779 1465496 pod_ready.go:102] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.586066 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:02.085212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:58.739138 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.739184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:03.243732 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:01.686704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.186767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.771061 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:00.771093 1465496 pod_ready.go:81] duration metric: took 3.509144879s waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:00.771107 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279749 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.279778 1465496 pod_ready.go:81] duration metric: took 1.508661327s waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279792 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286520 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.286550 1465496 pod_ready.go:81] duration metric: took 6.748377ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286564 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292455 1465496 pod_ready.go:92] pod "kube-proxy-8fdx9" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.292479 1465496 pod_ready.go:81] duration metric: took 5.904786ms waiting for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292491 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:04.300076 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.086312 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.086965 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:05.737969 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:07.738025 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.686645 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:09.186769 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.300932 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.799183 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:06.799208 1465496 pod_ready.go:81] duration metric: took 4.506710382s waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:06.799220 1465496 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:08.806102 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:08.585128 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.586208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.085360 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.238339 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:12.739920 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.186807 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.686030 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.306903 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.808471 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.085478 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.584968 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.238994 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.738301 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.686243 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.687966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:16.306169 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:18.306368 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.585283 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.085635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.738554 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:21.739391 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.186216 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.186318 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.186605 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.807270 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:23.307367 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.086508 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.585310 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.239650 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.739133 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.687020 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.186319 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:25.807083 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:27.807373 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.809229 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:28.586494 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.085758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.086070 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.237951 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.239234 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.186403 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.186539 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:32.305137 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:34.306664 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.586212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.085235 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.737751 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.239168 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.187669 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:37.686468 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.806650 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:39.305925 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.586428 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.084565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.739723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.237973 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.186321 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:42.187314 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:44.188149 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:41.307318 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.806323 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.085539 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.585341 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.239462 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.738184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:46.686042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.686866 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.806734 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.305446 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.305723 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.085346 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.085442 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:49.738268 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.239669 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.691518 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:53.186195 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.306654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.806020 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.085761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.586368 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.738548 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.739623 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:55.686288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:57.687383 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.807570 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.309552 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.084865 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.085071 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.085111 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.239410 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.239532 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:00.186408 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:02.186782 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.186839 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.806329 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.584749 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:07.586565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.739463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.740128 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.237766 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.187392 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.685886 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.805996 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.807179 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.086003 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.585799 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.238067 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.239177 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.686223 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.686341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:11.305779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:13.307616 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.085808 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.584477 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:14.738859 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.238767 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.187173 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.687034 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.806730 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:18.306392 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.584606 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.585553 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.738470 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.739486 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.185802 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:22.187625 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.806949 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.306121 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:25.306685 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.585692 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.085348 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.237900 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.238299 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.686574 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.687740 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.186290 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:27.805534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.806722 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.585853 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.087573 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.738699 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:30.740922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.241273 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.687338 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.186661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:32.306153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.306543 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.584981 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.585484 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.085009 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.739413 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.240386 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.687329 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:39.185388 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.308028 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.806629 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.085644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.585560 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.242599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.737723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.186288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.186859 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.306389 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.586579 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.085969 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.739244 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.237508 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:45.188774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.687222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:46.306909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:48.807077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.584667 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.584768 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.239422 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.687896 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:52.188700 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.306677 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.806006 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.585081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.585777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.085122 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.237822 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:56.238861 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.686276 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:57.186263 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.806184 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.306128 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.306364 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.588396 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.598213 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.737414 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.737727 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.739935 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:59.685823 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:01.686758 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:04.185852 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.807107 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.305740 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.085415 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.585036 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.239645 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.739347 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:06.686504 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:08.687322 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.305816 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.305938 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.586253 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.085522 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:10.239099 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.738591 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.186874 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.686181 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.306129 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.806507 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.585172 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.586137 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.738697 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.739523 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:15.686511 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:17.687193 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.306767 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.808302 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:19.085852 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.586641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.739573 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.238839 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:20.187546 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:22.687140 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.306401 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.307029 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.085548 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:26.586436 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.737681 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.737740 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.687572 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.186506 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.808456 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:28.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:30.307207 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.085660 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.087058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.739207 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.238687 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.686331 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.688381 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.187104 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.805987 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.806181 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:33.586190 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.085219 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.085516 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.238857 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.239092 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.687993 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.688870 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.808335 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.085919 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.585866 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.738192 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.738455 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.739283 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.185567 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.186680 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.307589 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.309027 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:44.586117 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.085597 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.238409 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.240204 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.685781 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.686167 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.807531 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.807973 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:50.308410 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.086271 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.086456 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.737691 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.739418 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.686475 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.687616 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:52.806510 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.806619 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:53.586673 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.085541 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.085777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.238680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.238735 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.239259 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.685972 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.686560 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.806707 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.806764 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.087035 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.088546 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.239507 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.240463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.686709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.687576 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.806909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:03.306534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.307522 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.585131 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.585178 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.738411 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.738605 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.186000 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.686048 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.806058 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.306442 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:08.585611 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.088448 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:09.238896 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.239934 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.186391 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.187940 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.680057 1465898 pod_ready.go:81] duration metric: took 4m0.000955013s waiting for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:12.680105 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:12.680132 1465898 pod_ready.go:38] duration metric: took 4m8.549185211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:12.680181 1465898 kubeadm.go:640] restartCluster took 4m32.094843295s
	W0131 03:24:12.680310 1465898 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:12.680376 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:12.307149 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:14.307483 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.586901 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.087404 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.738698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.239338 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.239499 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.806617 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:19.305298 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.585870 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.087112 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:20.737368 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:22.738599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.306715 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.807030 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.586072 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:25.586464 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.586525 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:24.731792 1465727 pod_ready.go:81] duration metric: took 4m0.00020412s waiting for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:24.731846 1465727 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:24.731869 1465727 pod_ready.go:38] duration metric: took 4m1.198813077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:24.731907 1465727 kubeadm.go:640] restartCluster took 5m3.213957096s
	W0131 03:24:24.731983 1465727 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:24.732022 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:26.064348 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.383924825s)
	I0131 03:24:26.064423 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:26.076943 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:26.087474 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:26.095980 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:26.096026 1465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:26.286603 1465898 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:25.808330 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.809779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.308001 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.087127 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:32.589212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:31.227776 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.495715112s)
	I0131 03:24:31.227855 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:31.241889 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:31.251082 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:31.259843 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:31.259887 1465727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0131 03:24:31.469869 1465727 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:32.310672 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:34.808959 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:36.696825 1465898 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:36.696904 1465898 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:36.696998 1465898 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:36.697121 1465898 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:36.697231 1465898 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:36.697306 1465898 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:36.699102 1465898 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:36.699244 1465898 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:36.699334 1465898 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:36.699475 1465898 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:36.699584 1465898 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:36.699700 1465898 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:36.699785 1465898 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:36.699873 1465898 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:36.699958 1465898 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:36.700052 1465898 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:36.700172 1465898 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:36.700217 1465898 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:36.700283 1465898 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:36.700345 1465898 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:36.700406 1465898 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:36.700482 1465898 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:36.700549 1465898 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:36.700647 1465898 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:36.700731 1465898 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:36.702370 1465898 out.go:204]   - Booting up control plane ...
	I0131 03:24:36.702525 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:36.702658 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:36.702731 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:36.702855 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:36.702975 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:36.703038 1465898 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:36.703248 1465898 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:36.703360 1465898 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503117 seconds
	I0131 03:24:36.703517 1465898 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:36.703652 1465898 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:36.703734 1465898 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:36.703950 1465898 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-873005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:36.704029 1465898 kubeadm.go:322] [bootstrap-token] Using token: 51ueuu.c5jl6zenf29j1pbj
	I0131 03:24:36.706123 1465898 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:36.706237 1465898 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:36.706316 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:36.706475 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:36.706662 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:36.706829 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:36.706946 1465898 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:36.707093 1465898 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:36.707179 1465898 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:36.707226 1465898 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:36.707236 1465898 kubeadm.go:322] 
	I0131 03:24:36.707310 1465898 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:36.707317 1465898 kubeadm.go:322] 
	I0131 03:24:36.707411 1465898 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:36.707418 1465898 kubeadm.go:322] 
	I0131 03:24:36.707438 1465898 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:36.707518 1465898 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:36.707590 1465898 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:36.707604 1465898 kubeadm.go:322] 
	I0131 03:24:36.707693 1465898 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:36.707706 1465898 kubeadm.go:322] 
	I0131 03:24:36.707775 1465898 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:36.707785 1465898 kubeadm.go:322] 
	I0131 03:24:36.707834 1465898 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:36.707932 1465898 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:36.708029 1465898 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:36.708038 1465898 kubeadm.go:322] 
	I0131 03:24:36.708135 1465898 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:36.708236 1465898 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:36.708245 1465898 kubeadm.go:322] 
	I0131 03:24:36.708341 1465898 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708458 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:36.708490 1465898 kubeadm.go:322] 	--control-plane 
	I0131 03:24:36.708499 1465898 kubeadm.go:322] 
	I0131 03:24:36.708601 1465898 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:36.708611 1465898 kubeadm.go:322] 
	I0131 03:24:36.708703 1465898 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708836 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:36.708855 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:24:36.708865 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:36.710643 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:33.579236 1466459 pod_ready.go:81] duration metric: took 4m0.001168183s waiting for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:33.579284 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:33.579320 1466459 pod_ready.go:38] duration metric: took 4m12.550695133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:33.579357 1466459 kubeadm.go:640] restartCluster took 4m32.725356038s
	W0131 03:24:33.579451 1466459 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:33.579495 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:36.712379 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:36.727135 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:36.752650 1465898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:36.752760 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.752766 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=default-k8s-diff-port-873005 minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.833601 1465898 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:37.204982 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:37.706104 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.205928 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.705169 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:39.205448 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.810623 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:39.308000 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:44.456046 1465727 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0131 03:24:44.456133 1465727 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:44.456239 1465727 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:44.456349 1465727 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:44.456507 1465727 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:44.456673 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:44.456815 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:44.456888 1465727 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0131 03:24:44.456975 1465727 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:44.458558 1465727 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:44.458637 1465727 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:44.458740 1465727 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:44.458837 1465727 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:44.458937 1465727 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:44.459040 1465727 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:44.459117 1465727 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:44.459212 1465727 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:44.459291 1465727 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:44.459385 1465727 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:44.459491 1465727 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:44.459552 1465727 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:44.459628 1465727 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:44.459691 1465727 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:44.459755 1465727 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:44.459827 1465727 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:44.459899 1465727 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:44.460002 1465727 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:44.461481 1465727 out.go:204]   - Booting up control plane ...
	I0131 03:24:44.461592 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:44.461687 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:44.461801 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:44.461930 1465727 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:44.462130 1465727 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:44.462255 1465727 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503405 seconds
	I0131 03:24:44.462398 1465727 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:44.462577 1465727 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:44.462653 1465727 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:44.462817 1465727 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-711547 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0131 03:24:44.462913 1465727 kubeadm.go:322] [bootstrap-token] Using token: etlsjx.t1u4cz6ewuek932w
	I0131 03:24:44.465248 1465727 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:44.465404 1465727 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:44.465615 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:44.465805 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:44.465987 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:44.466088 1465727 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:44.466170 1465727 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:44.466239 1465727 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:44.466247 1465727 kubeadm.go:322] 
	I0131 03:24:44.466332 1465727 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:44.466354 1465727 kubeadm.go:322] 
	I0131 03:24:44.466456 1465727 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:44.466473 1465727 kubeadm.go:322] 
	I0131 03:24:44.466524 1465727 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:44.466596 1465727 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:44.466677 1465727 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:44.466696 1465727 kubeadm.go:322] 
	I0131 03:24:44.466764 1465727 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:44.466870 1465727 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:44.466971 1465727 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:44.466988 1465727 kubeadm.go:322] 
	I0131 03:24:44.467085 1465727 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0131 03:24:44.467196 1465727 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:44.467208 1465727 kubeadm.go:322] 
	I0131 03:24:44.467300 1465727 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467443 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:44.467479 1465727 kubeadm.go:322]     --control-plane 	  
	I0131 03:24:44.467488 1465727 kubeadm.go:322] 
	I0131 03:24:44.467588 1465727 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:44.467599 1465727 kubeadm.go:322] 
	I0131 03:24:44.467695 1465727 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467834 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:44.467849 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:24:44.467858 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:44.470130 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:39.705234 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.205164 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.705674 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.205045 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.705592 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.205813 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.705913 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.205465 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.705236 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.205365 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.807553 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:43.809153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:47.613982 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.034446752s)
	I0131 03:24:47.614087 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:47.627141 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:47.635785 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:47.643856 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:47.643912 1466459 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:47.866988 1466459 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:44.472066 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:44.484082 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:44.503062 1465727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:44.503138 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.503164 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=old-k8s-version-711547 minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.557194 1465727 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:44.796311 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.296601 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.796904 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.296474 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.796658 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.296647 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.796712 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.296469 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.705251 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.205696 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.705947 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.205519 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.705735 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.205285 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.706009 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.205416 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.705969 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.205783 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.306658 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:48.307077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:50.311654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:49.705636 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.205958 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.456803 1465898 kubeadm.go:1088] duration metric: took 13.704121927s to wait for elevateKubeSystemPrivileges.
	I0131 03:24:50.456854 1465898 kubeadm.go:406] StartCluster complete in 5m9.932475085s
	I0131 03:24:50.456883 1465898 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.457001 1465898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:24:50.460015 1465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.460408 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:24:50.460617 1465898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:24:50.460718 1465898 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460745 1465898 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.460753 1465898 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:24:50.460798 1465898 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460831 1465898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-873005"
	I0131 03:24:50.460855 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461315 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461342 1465898 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.461361 1465898 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:50.461364 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0131 03:24:50.461369 1465898 addons.go:243] addon metrics-server should already be in state true
	I0131 03:24:50.461410 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461322 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461644 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.461778 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461812 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.460670 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:24:50.486168 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0131 03:24:50.486189 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0131 03:24:50.486323 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0131 03:24:50.486737 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487153 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487761 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.487781 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488055 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.488074 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488193 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.488460 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.488587 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.488984 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.489649 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.489717 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.490413 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.490433 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.492357 1465898 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.492372 1465898 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:24:50.492402 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.492774 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.492815 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.493142 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.493853 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.493904 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.510041 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0131 03:24:50.510628 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.511294 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.511316 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.511749 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.511982 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.512352 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0131 03:24:50.512842 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.513435 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.513454 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.513922 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.513984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.514319 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0131 03:24:50.516752 1465898 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:24:50.514718 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.514788 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.518232 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:24:50.518238 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.518248 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:24:50.518271 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.521721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.522659 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522988 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.523038 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.523050 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.523231 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.523401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.523571 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.526843 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.530691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.532381 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.534246 1465898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:24:50.535799 1465898 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.535826 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:24:50.535848 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.538666 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.538998 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.539031 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.539275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.540037 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0131 03:24:50.540217 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.540435 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.540502 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.540575 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.541462 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.541480 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.541918 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.542136 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.543588 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.546790 1465898 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.546807 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:24:50.546828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.549791 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550227 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.550254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550545 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.550712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.550827 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.550914 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.720404 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:24:50.750602 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:24:50.750631 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:24:50.770493 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.781740 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.831005 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:24:50.831037 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:24:50.957145 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:50.957195 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:24:50.995868 1465898 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-873005" context rescaled to 1 replicas
	I0131 03:24:50.995924 1465898 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:24:50.997774 1465898 out.go:177] * Verifying Kubernetes components...
	I0131 03:24:50.999400 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:51.127181 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:52.814257 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.093763301s)
	I0131 03:24:52.814295 1465898 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0131 03:24:53.442603 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.660817091s)
	I0131 03:24:53.442735 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.315510869s)
	I0131 03:24:53.442653 1465898 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.443214595s)
	I0131 03:24:53.442784 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442807 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442746 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442847 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442800 1465898 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.442686 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.672154364s)
	I0131 03:24:53.442931 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442944 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443178 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443204 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443234 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443271 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443290 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443307 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443324 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443326 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443342 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443355 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443370 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443443 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443463 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443474 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443484 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443558 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443571 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443834 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443843 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443852 1465898 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:53.443857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.444009 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.444018 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.477413 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.477442 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.477848 1465898 node_ready.go:49] node "default-k8s-diff-port-873005" has status "Ready":"True"
	I0131 03:24:53.477878 1465898 node_ready.go:38] duration metric: took 34.988647ms waiting for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.477903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.477913 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.477891 1465898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:53.477926 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:48.797209 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.296541 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.796400 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.297357 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.797175 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.297121 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.796457 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.297151 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.797043 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.296354 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.480701 1465898 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0131 03:24:53.482138 1465898 addons.go:505] enable addons completed in 3.021541847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0131 03:24:53.518183 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:52.806757 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:54.808761 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:53.796405 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.296358 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.796988 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.296633 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.797131 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.296750 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.797103 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.296955 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.796330 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.296387 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.837963 1466459 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:58.838075 1466459 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:58.838193 1466459 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:58.838328 1466459 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:58.838507 1466459 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:58.838599 1466459 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:58.840259 1466459 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:58.840364 1466459 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:58.840490 1466459 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:58.840620 1466459 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:58.840718 1466459 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:58.840826 1466459 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:58.840905 1466459 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:58.841008 1466459 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:58.841106 1466459 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:58.841214 1466459 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:58.841304 1466459 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:58.841349 1466459 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:58.841420 1466459 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:58.841492 1466459 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:58.841553 1466459 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:58.841621 1466459 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:58.841694 1466459 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:58.841805 1466459 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:58.841887 1466459 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:58.843555 1466459 out.go:204]   - Booting up control plane ...
	I0131 03:24:58.843684 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:58.843804 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:58.843917 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:58.844072 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:58.844208 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:58.844297 1466459 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:58.844540 1466459 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:58.844657 1466459 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003861 seconds
	I0131 03:24:58.844797 1466459 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:58.844947 1466459 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:58.845022 1466459 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:58.845232 1466459 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-958254 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:58.845309 1466459 kubeadm.go:322] [bootstrap-token] Using token: ash1vg.z2czyygl2nysl4yb
	I0131 03:24:58.846832 1466459 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:58.846943 1466459 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:58.847042 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:58.847238 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:58.847445 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:58.847620 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:58.847735 1466459 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:58.847908 1466459 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:58.847969 1466459 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:58.848034 1466459 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:58.848045 1466459 kubeadm.go:322] 
	I0131 03:24:58.848142 1466459 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:58.848152 1466459 kubeadm.go:322] 
	I0131 03:24:58.848279 1466459 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:58.848308 1466459 kubeadm.go:322] 
	I0131 03:24:58.848355 1466459 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:58.848440 1466459 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:58.848515 1466459 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:58.848531 1466459 kubeadm.go:322] 
	I0131 03:24:58.848611 1466459 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:58.848622 1466459 kubeadm.go:322] 
	I0131 03:24:58.848684 1466459 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:58.848692 1466459 kubeadm.go:322] 
	I0131 03:24:58.848769 1466459 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:58.848884 1466459 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:58.848987 1466459 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:58.848994 1466459 kubeadm.go:322] 
	I0131 03:24:58.849127 1466459 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:58.849252 1466459 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:58.849265 1466459 kubeadm.go:322] 
	I0131 03:24:58.849390 1466459 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849540 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:58.849572 1466459 kubeadm.go:322] 	--control-plane 
	I0131 03:24:58.849587 1466459 kubeadm.go:322] 
	I0131 03:24:58.849698 1466459 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:58.849710 1466459 kubeadm.go:322] 
	I0131 03:24:58.849817 1466459 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849963 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:58.849981 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:24:58.849991 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:58.851748 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:54.532127 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.532155 1465898 pod_ready.go:81] duration metric: took 1.013942045s waiting for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.532164 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537895 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.537924 1465898 pod_ready.go:81] duration metric: took 5.752669ms waiting for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537937 1465898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543819 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.543850 1465898 pod_ready.go:81] duration metric: took 5.903392ms waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543863 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549279 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.549303 1465898 pod_ready.go:81] duration metric: took 5.431331ms waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549315 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647791 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.647830 1465898 pod_ready.go:81] duration metric: took 98.504261ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647846 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446878 1465898 pod_ready.go:92] pod "kube-proxy-blwwq" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.446913 1465898 pod_ready.go:81] duration metric: took 799.058225ms waiting for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446927 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848226 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.848261 1465898 pod_ready.go:81] duration metric: took 401.323547ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848275 1465898 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:57.855091 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:57.306243 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:59.307152 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:58.796423 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.297312 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.796598 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.296932 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.797306 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.963954 1465727 kubeadm.go:1088] duration metric: took 16.460870964s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:00.964007 1465727 kubeadm.go:406] StartCluster complete in 5m39.492487154s
	I0131 03:25:00.964037 1465727 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.964135 1465727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:00.965942 1465727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.966222 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:00.966379 1465727 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:00.966464 1465727 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966478 1465727 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966474 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:25:00.966502 1465727 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966514 1465727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-711547"
	I0131 03:25:00.966522 1465727 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-711547"
	W0131 03:25:00.966531 1465727 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:00.966493 1465727 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-711547"
	W0131 03:25:00.966557 1465727 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:00.966579 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966610 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966981 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.966993 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967028 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967040 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967142 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967186 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.986034 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0131 03:25:00.986291 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0131 03:25:00.986619 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.986746 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.987299 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987320 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987467 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987479 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987834 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.988010 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:00.988075 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0131 03:25:00.988399 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.989011 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.989031 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.989620 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.990204 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.990247 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.990830 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.991921 1465727 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-711547"
	W0131 03:25:00.991946 1465727 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:00.991979 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.992390 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.992429 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.996772 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.996817 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.009234 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0131 03:25:01.009861 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.010560 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.010580 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.011185 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.011401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.013070 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0131 03:25:01.013907 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.014029 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.016324 1465727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:01.014597 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.017922 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.018046 1465727 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.018070 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:01.018094 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.018526 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.019101 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:01.019150 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.019442 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0131 03:25:01.019888 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.020393 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.020424 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.020822 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.020992 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.021500 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.022242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.022654 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.022821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.022997 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.023406 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.025473 1465727 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:01.026870 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:01.026888 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:01.026904 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.029751 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030085 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.030100 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030398 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.030647 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.030818 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.030977 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.037553 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0131 03:25:01.038049 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.038517 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.038542 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.038963 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.039329 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.041534 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.042115 1465727 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.042137 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:01.042170 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.045444 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.045973 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.045992 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.046187 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.046374 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.046619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.046751 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.284926 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:01.284951 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:01.298019 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:01.338666 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.364117 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.383424 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:01.383460 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:01.499627 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.499676 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:01.557563 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.633792 1465727 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-711547" context rescaled to 1 replicas
	I0131 03:25:01.633844 1465727 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:01.636944 1465727 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:01.638596 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:02.375769 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.07770508s)
	I0131 03:25:02.375806 1465727 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:02.849278 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.485115978s)
	I0131 03:25:02.849343 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849348 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.510642603s)
	I0131 03:25:02.849361 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849397 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849411 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849431 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291827391s)
	I0131 03:25:02.849463 1465727 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.210839065s)
	I0131 03:25:02.849466 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849478 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849490 1465727 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.851686 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851687 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851705 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851714 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851701 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851724 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851732 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851715 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851726 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851744 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851749 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851754 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851736 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851812 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851828 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.852136 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852158 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852178 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852187 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852194 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852203 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852214 1465727 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-711547"
	I0131 03:25:02.852220 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852249 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852257 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.878278 1465727 node_ready.go:49] node "old-k8s-version-711547" has status "Ready":"True"
	I0131 03:25:02.878313 1465727 node_ready.go:38] duration metric: took 28.809729ms waiting for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.878339 1465727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:02.906619 1465727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:02.910781 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.910809 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.911127 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.911137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.911148 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.913178 1465727 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0131 03:24:58.853196 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:58.880016 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:58.909967 1466459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:58.910062 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.910111 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=embed-certs-958254 minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.271954 1466459 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:59.310346 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.810934 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.310635 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.810402 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.310569 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.810714 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.310744 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.811360 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:03.311376 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.915069 1465727 addons.go:505] enable addons completed in 1.948706414s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0131 03:24:59.856962 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:02.358614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:01.807470 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:04.306044 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:03.811326 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.310435 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.811033 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.310537 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.810596 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.311182 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.811200 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.310633 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.810619 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:08.310985 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.914636 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:07.415226 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.414866 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.414894 1465727 pod_ready.go:81] duration metric: took 5.508246838s waiting for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.414904 1465727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421152 1465727 pod_ready.go:92] pod "kube-proxy-wzft2" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.421177 1465727 pod_ready.go:81] duration metric: took 6.2664ms waiting for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421191 1465727 pod_ready.go:38] duration metric: took 5.542837407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:08.421243 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:08.421313 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:08.439228 1465727 api_server.go:72] duration metric: took 6.805346982s to wait for apiserver process to appear ...
	I0131 03:25:08.439258 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:08.439321 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:25:08.445886 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:25:08.446826 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:25:08.446848 1465727 api_server.go:131] duration metric: took 7.582095ms to wait for apiserver health ...
	I0131 03:25:08.446856 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:08.450063 1465727 system_pods.go:59] 4 kube-system pods found
	I0131 03:25:08.450085 1465727 system_pods.go:61] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.450089 1465727 system_pods.go:61] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.450095 1465727 system_pods.go:61] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.450100 1465727 system_pods.go:61] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.450112 1465727 system_pods.go:74] duration metric: took 3.250434ms to wait for pod list to return data ...
	I0131 03:25:08.450121 1465727 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:08.452528 1465727 default_sa.go:45] found service account: "default"
	I0131 03:25:08.452546 1465727 default_sa.go:55] duration metric: took 2.420247ms for default service account to be created ...
	I0131 03:25:08.452553 1465727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:08.457485 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.457514 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.457522 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.457533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.457540 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.457561 1465727 retry.go:31] will retry after 235.942588ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:04.856217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.856378 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.857457 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.800354 1465496 pod_ready.go:81] duration metric: took 4m0.001111271s waiting for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:06.800395 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:25:06.800424 1465496 pod_ready.go:38] duration metric: took 4m13.561240535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:06.800474 1465496 kubeadm.go:640] restartCluster took 4m33.63933558s
	W0131 03:25:06.800585 1465496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:25:06.800626 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:25:08.811193 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.310464 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.810641 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.310665 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.810667 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.995304 1466459 kubeadm.go:1088] duration metric: took 12.08531849s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:10.995343 1466459 kubeadm.go:406] StartCluster complete in 5m10.197561628s
	I0131 03:25:10.995368 1466459 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.995476 1466459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:10.997565 1466459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.998562 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:10.998861 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:25:10.999077 1466459 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:10.999167 1466459 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-958254"
	I0131 03:25:10.999184 1466459 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-958254"
	W0131 03:25:10.999192 1466459 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:10.999198 1466459 addons.go:69] Setting default-storageclass=true in profile "embed-certs-958254"
	I0131 03:25:10.999232 1466459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-958254"
	I0131 03:25:10.999234 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:10.999598 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999631 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999673 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999709 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999738 1466459 addons.go:69] Setting metrics-server=true in profile "embed-certs-958254"
	I0131 03:25:10.999759 1466459 addons.go:234] Setting addon metrics-server=true in "embed-certs-958254"
	W0131 03:25:10.999767 1466459 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:10.999811 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.000160 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.000206 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.020646 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0131 03:25:11.020716 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0131 03:25:11.021273 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021412 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021944 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.021972 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022107 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.022139 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022542 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022540 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022777 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.023181 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.023224 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.027202 1466459 addons.go:234] Setting addon default-storageclass=true in "embed-certs-958254"
	W0131 03:25:11.027230 1466459 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:11.027263 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.027702 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.027754 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.028003 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0131 03:25:11.029048 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.029571 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.029590 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.030209 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.030885 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.030931 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.042923 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0131 03:25:11.043492 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.044071 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.044086 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.044497 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.044800 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.046645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.049444 1466459 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:11.051401 1466459 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.051441 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:11.051477 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.054476 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055341 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.055429 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0131 03:25:11.055608 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.055626 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055808 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.056025 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.056244 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.056409 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.056920 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.056932 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.056989 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0131 03:25:11.057274 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.057428 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.057495 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.057847 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.057860 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.058662 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.059343 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.059372 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.059555 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.061701 1466459 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:11.063119 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:11.063138 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:11.063159 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.066101 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066408 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.066423 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066762 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.066931 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.067054 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.067162 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.080881 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0131 03:25:11.081403 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.081919 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.081931 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.082442 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.082905 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.085059 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.085518 1466459 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.085529 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:11.085545 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.087954 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.088806 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.088858 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.088868 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.089011 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.089197 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.089609 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.229346 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.255093 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:11.255124 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:11.278162 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.314832 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:11.314860 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:11.374433 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.374463 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:11.386186 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:11.431597 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.617487 1466459 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-958254" context rescaled to 1 replicas
	I0131 03:25:11.617543 1466459 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:11.620222 1466459 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:11.621888 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:08.700194 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.700226 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.700232 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.700238 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.700243 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.700267 1465727 retry.go:31] will retry after 264.487072ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:08.970950 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.970994 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.971002 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.971013 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.971020 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.971113 1465727 retry.go:31] will retry after 296.249207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.273631 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.273666 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.273675 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.273683 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.273696 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.273722 1465727 retry.go:31] will retry after 556.880076ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.835957 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.835985 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.835991 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.835997 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.836002 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.836020 1465727 retry.go:31] will retry after 541.012405ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:10.382622 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:10.382657 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:10.382665 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:10.382674 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:10.382681 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:10.382705 1465727 retry.go:31] will retry after 644.079363ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.036738 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.036777 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.036785 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.036796 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.036803 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.036825 1465727 retry.go:31] will retry after 832.963851ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.877526 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.877569 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.877578 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.877589 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.877597 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.877635 1465727 retry.go:31] will retry after 1.088792554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:12.972355 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:12.972391 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:12.972397 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:12.972403 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:12.972408 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:12.972428 1465727 retry.go:31] will retry after 1.37018086s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:13.615542 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337333269s)
	I0131 03:25:13.615599 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.229373467s)
	I0131 03:25:13.615607 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615633 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.615632 1466459 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:13.615738 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.386359945s)
	I0131 03:25:13.615790 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615807 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616101 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616109 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616118 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616129 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616138 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616174 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616184 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616194 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616204 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616351 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616374 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.617924 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.618094 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.618057 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.783459 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.783487 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.783847 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.783872 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.966310 1466459 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.344369704s)
	I0131 03:25:13.966372 1466459 node_ready.go:35] waiting up to 6m0s for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.966498 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.534826964s)
	I0131 03:25:13.966582 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.966602 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.966990 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967011 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967023 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.967033 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.967278 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967298 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967310 1466459 addons.go:470] Verifying addon metrics-server=true in "embed-certs-958254"
	I0131 03:25:13.970159 1466459 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:10.858108 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.357207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.971527 1466459 addons.go:505] enable addons completed in 2.972461213s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:13.987533 1466459 node_ready.go:49] node "embed-certs-958254" has status "Ready":"True"
	I0131 03:25:13.987564 1466459 node_ready.go:38] duration metric: took 21.175558ms waiting for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.987577 1466459 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:13.998968 1466459 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505741 1466459 pod_ready.go:92] pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.505764 1466459 pod_ready.go:81] duration metric: took 1.506759288s waiting for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505775 1466459 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511011 1466459 pod_ready.go:92] pod "etcd-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.511037 1466459 pod_ready.go:81] duration metric: took 5.255671ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511050 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515672 1466459 pod_ready.go:92] pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.515691 1466459 pod_ready.go:81] duration metric: took 4.632936ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515699 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520372 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.520388 1466459 pod_ready.go:81] duration metric: took 4.683171ms waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520397 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570633 1466459 pod_ready.go:92] pod "kube-proxy-2n2v5" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.570660 1466459 pod_ready.go:81] duration metric: took 50.257557ms waiting for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570671 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970302 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.970325 1466459 pod_ready.go:81] duration metric: took 399.647846ms waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970336 1466459 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:17.977775 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:14.349642 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:14.349679 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:14.349688 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:14.349698 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:14.349705 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:14.349726 1465727 retry.go:31] will retry after 1.923619057s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:16.279057 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:16.279090 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:16.279098 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:16.279108 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:16.279114 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:16.279137 1465727 retry.go:31] will retry after 2.073030623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:18.359162 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:18.359189 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:18.359195 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:18.359204 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:18.359209 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:18.359228 1465727 retry.go:31] will retry after 3.260033275s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:15.855521 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:17.855614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:20.514278 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.713623849s)
	I0131 03:25:20.514394 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:20.527663 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:25:20.536562 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:25:20.545294 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:25:20.545336 1465496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:25:20.598639 1465496 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0131 03:25:20.598867 1465496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:25:20.744229 1465496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:25:20.744371 1465496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:25:20.744509 1465496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:25:20.966346 1465496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:25:20.968311 1465496 out.go:204]   - Generating certificates and keys ...
	I0131 03:25:20.968451 1465496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:25:20.968540 1465496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:25:20.968652 1465496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:25:20.968758 1465496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:25:20.968846 1465496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:25:20.969285 1465496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:25:20.969711 1465496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:25:20.970103 1465496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:25:20.970500 1465496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:25:20.970914 1465496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:25:20.971238 1465496 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:25:20.971319 1465496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:25:21.137192 1465496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:25:21.403913 1465496 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0131 03:25:21.508809 1465496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:25:21.721878 1465496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:25:22.136726 1465496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:25:22.137207 1465496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:25:22.139977 1465496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:25:19.979362 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.477779 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.624554 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:21.624586 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:21.624592 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:21.624602 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:21.624607 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:21.624626 1465727 retry.go:31] will retry after 3.519201574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:19.856226 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.856396 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:23.857487 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
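Throughout this run the metrics-server pods keep reporting Ready: False. If one wanted to inspect them by hand, a sketch along these lines would work, assuming the addon's usual k8s-app=metrics-server label (not something the test itself does):

    # Show the pod, its events, and recent container logs.
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50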
	I0131 03:25:22.141783 1465496 out.go:204]   - Booting up control plane ...
	I0131 03:25:22.141884 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:25:22.141972 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:25:22.143031 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:25:22.163448 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:25:22.163586 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:25:22.163682 1465496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:25:22.287643 1465496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:25:24.479871 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:26.977625 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:25.149248 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:25.149277 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:25.149282 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:25.149290 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:25.149295 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:25.149314 1465727 retry.go:31] will retry after 5.238557946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:25.857650 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:28.356862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.793355 1465496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506089 seconds
	I0131 03:25:30.811559 1465496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:25:30.830148 1465496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:25:31.367774 1465496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:25:31.368036 1465496 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-625812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:25:31.887121 1465496 kubeadm.go:322] [bootstrap-token] Using token: t3t0h9.3huj9bl3w24ti869
	I0131 03:25:31.888852 1465496 out.go:204]   - Configuring RBAC rules ...
	I0131 03:25:31.888974 1465496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:25:31.893841 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:25:31.902695 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:25:31.908132 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:25:31.912738 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:25:31.918089 1465496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:25:31.936690 1465496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:25:32.182433 1465496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:25:32.325953 1465496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:25:32.325981 1465496 kubeadm.go:322] 
	I0131 03:25:32.326114 1465496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:25:32.326143 1465496 kubeadm.go:322] 
	I0131 03:25:32.326244 1465496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:25:32.326272 1465496 kubeadm.go:322] 
	I0131 03:25:32.326332 1465496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:25:32.326416 1465496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:25:32.326500 1465496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:25:32.326511 1465496 kubeadm.go:322] 
	I0131 03:25:32.326588 1465496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:25:32.326598 1465496 kubeadm.go:322] 
	I0131 03:25:32.326664 1465496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:25:32.326674 1465496 kubeadm.go:322] 
	I0131 03:25:32.326743 1465496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:25:32.326853 1465496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:25:32.326947 1465496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:25:32.326958 1465496 kubeadm.go:322] 
	I0131 03:25:32.327052 1465496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:25:32.327151 1465496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:25:32.327160 1465496 kubeadm.go:322] 
	I0131 03:25:32.327264 1465496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327405 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:25:32.327437 1465496 kubeadm.go:322] 	--control-plane 
	I0131 03:25:32.327447 1465496 kubeadm.go:322] 
	I0131 03:25:32.327553 1465496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:25:32.327564 1465496 kubeadm.go:322] 
	I0131 03:25:32.327667 1465496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327800 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:25:32.328638 1465496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:25:32.328815 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:25:32.328835 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:25:32.330439 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
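The bridge CNI setup that follows writes a single conflist into /etc/cni/net.d. A minimal sketch of what such a file can look like; the exact 457-byte payload minikube copies is not reproduced here, and the field values below (subnet, plugin options) are illustrative assumptions:

    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF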
	I0131 03:25:28.984930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:31.480349 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.393923 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:30.393959 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:30.393968 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:30.393979 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:30.393985 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:30.394010 1465727 retry.go:31] will retry after 6.045479872s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:30.357227 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.358411 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.332529 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:25:32.442284 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:25:32.487754 1465496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:25:32.487829 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.487926 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=no-preload-625812 minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.706857 1465496 ops.go:34] apiserver oom_adj: -16
	I0131 03:25:32.707010 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.207717 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.707229 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.207690 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.707786 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:35.207781 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.980255 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.481025 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.444898 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:36.444932 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:36.444938 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:36.444946 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:36.444951 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:36.444993 1465727 retry.go:31] will retry after 6.676077992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:34.855915 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:37.356945 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:35.707273 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.207173 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.707797 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.207697 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.707209 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.207989 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.707538 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.207693 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.707737 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:40.207439 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.980635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:41.479377 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:43.125885 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:43.125912 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:43.125917 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:43.125924 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:43.125928 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:43.125947 1465727 retry.go:31] will retry after 7.454064585s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:39.858377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:42.356966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:40.707639 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.207708 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.707131 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.207700 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.707292 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.207810 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.707392 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.207490 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.707258 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.883783 1465496 kubeadm.go:1088] duration metric: took 12.396028951s to wait for elevateKubeSystemPrivileges.
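The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait: after creating the minikube-rbac clusterrolebinding, minikube polls until the default ServiceAccount exists. A shell sketch of the same wait (not minikube's actual code; the 0.5s interval matches the roughly 500ms spacing in the log):

    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done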
	I0131 03:25:44.883823 1465496 kubeadm.go:406] StartCluster complete in 5m11.777629477s
	I0131 03:25:44.883850 1465496 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.883949 1465496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:44.886319 1465496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.886620 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:44.886727 1465496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:44.886814 1465496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-625812"
	I0131 03:25:44.886837 1465496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-625812"
	W0131 03:25:44.886849 1465496 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:44.886903 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.886934 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:25:44.886991 1465496 addons.go:69] Setting default-storageclass=true in profile "no-preload-625812"
	I0131 03:25:44.887007 1465496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-625812"
	I0131 03:25:44.887134 1465496 addons.go:69] Setting metrics-server=true in profile "no-preload-625812"
	I0131 03:25:44.887155 1465496 addons.go:234] Setting addon metrics-server=true in "no-preload-625812"
	W0131 03:25:44.887164 1465496 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:44.887216 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.887313 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887349 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887407 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887439 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887611 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887655 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.908876 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0131 03:25:44.908881 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0131 03:25:44.908879 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0131 03:25:44.909406 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909433 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909512 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909925 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.909950 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910054 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910098 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910123 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910148 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910434 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910530 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910543 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910740 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.911086 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911140 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.911185 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911230 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.914635 1465496 addons.go:234] Setting addon default-storageclass=true in "no-preload-625812"
	W0131 03:25:44.914667 1465496 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:44.914698 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.915089 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.915135 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.931265 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0131 03:25:44.931296 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0131 03:25:44.931816 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.931859 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.932148 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932599 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932677 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932938 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933062 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.933655 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.933681 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.933726 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933947 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934129 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.934262 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934954 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.935001 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.936333 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.938601 1465496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:44.940239 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:44.940256 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:44.940273 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.938638 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.942306 1465496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:44.944873 1465496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:44.944894 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:44.944914 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.943649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944987 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.945023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944263 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.945795 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.946072 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.946309 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.949097 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949522 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.949544 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949710 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.949892 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.950040 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.950179 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.959691 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0131 03:25:44.960146 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.960696 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.960723 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.961045 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.961279 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.963057 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.963321 1465496 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:44.963342 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:44.963363 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.966336 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.966808 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.966845 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.967006 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.967205 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.967329 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.967472 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:45.114858 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:45.135760 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:45.209439 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:45.209466 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:45.219146 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:45.287400 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:45.287430 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:45.380888 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:45.380917 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:45.462341 1465496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-625812" context rescaled to 1 replicas
	I0131 03:25:45.462403 1465496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:45.463834 1465496 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:45.465542 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:45.515980 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:46.322228 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.20732453s)
	I0131 03:25:46.322281 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.186472094s)
	I0131 03:25:46.322327 1465496 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
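The host record is injected by rewriting the coredns ConfigMap through the sed pipeline completed above. To inspect the result by hand (illustrative):

    # The Corefile should now contain a hosts block mapping 192.168.72.1 to
    # host.minikube.internal, plus a `log` directive ahead of `errors`.
    kubectl -n kube-system get configmap coredns -o yaml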
	I0131 03:25:46.322296 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322366 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322413 1465496 node_ready.go:35] waiting up to 6m0s for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.322369 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.103177926s)
	I0131 03:25:46.322663 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322676 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322757 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.322760 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.322773 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.322783 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322791 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323137 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323156 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323167 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.323176 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323177 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323257 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323281 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323295 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323733 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323755 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.329699 1465496 node_ready.go:49] node "no-preload-625812" has status "Ready":"True"
	I0131 03:25:46.329719 1465496 node_ready.go:38] duration metric: took 7.243031ms waiting for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.329728 1465496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:46.345672 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.345703 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.345984 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.346000 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.348953 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:46.699387 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183353653s)
	I0131 03:25:46.699456 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699474 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.699910 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.699932 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.699945 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699957 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.700251 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.700272 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.700285 1465496 addons.go:470] Verifying addon metrics-server=true in "no-preload-625812"
	I0131 03:25:46.702053 1465496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:43.980700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.478141 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:44.855513 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.857198 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:49.356657 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.703328 1465496 addons.go:505] enable addons completed in 1.816619953s: enabled=[storage-provisioner default-storageclass metrics-server]
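For reference, the same addon set can be toggled against this profile from the minikube CLI (equivalent end state, not what the test harness runs here):

    minikube -p no-preload-625812 addons enable storage-provisioner
    minikube -p no-preload-625812 addons enable metrics-server
    minikube -p no-preload-625812 addons list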
	I0131 03:25:46.865293 1465496 pod_ready.go:97] error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865325 1465496 pod_ready.go:81] duration metric: took 516.342792ms waiting for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:46.865336 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865343 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872316 1465496 pod_ready.go:92] pod "coredns-76f75df574-hvxjf" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.872345 1465496 pod_ready.go:81] duration metric: took 1.006996095s waiting for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872355 1465496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878192 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.878215 1465496 pod_ready.go:81] duration metric: took 5.854656ms waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878223 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883120 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.883139 1465496 pod_ready.go:81] duration metric: took 4.910099ms waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883147 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889909 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.889934 1465496 pod_ready.go:81] duration metric: took 6.780796ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889944 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926206 1465496 pod_ready.go:92] pod "kube-proxy-pkvj6" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:48.926230 1465496 pod_ready.go:81] duration metric: took 1.036280111s waiting for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926239 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325588 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:49.325613 1465496 pod_ready.go:81] duration metric: took 399.368272ms waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325623 1465496 pod_ready.go:38] duration metric: took 2.995885901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
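The label list above (k8s-app=kube-dns, component=etcd, and so on) can be expressed as direct kubectl waits; a sketch, assuming the standard kubeadm/minikube labels:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=6m
    for c in etcd kube-apiserver kube-controller-manager kube-scheduler; do
      kubectl -n kube-system wait pod -l component="$c" --for=condition=Ready --timeout=6m
    done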
	I0131 03:25:49.325640 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:49.325693 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:49.339591 1465496 api_server.go:72] duration metric: took 3.877145066s to wait for apiserver process to appear ...
	I0131 03:25:49.339624 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:49.339652 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:25:49.345130 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:25:49.346350 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:25:49.346371 1465496 api_server.go:131] duration metric: took 6.739501ms to wait for apiserver health ...
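The same health probe can be issued by hand against the endpoint shown above (illustrative):

    # Hits https://192.168.72.23:8443/healthz through the cluster credentials.
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get --raw /healthz \
        --kubeconfig=/var/lib/minikube/kubeconfig
    # expected output: ok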
	I0131 03:25:49.346379 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:49.529845 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:25:49.529876 1465496 system_pods.go:61] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.529881 1465496 system_pods.go:61] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.529885 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.529890 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.529894 1465496 system_pods.go:61] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.529898 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.529905 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.529909 1465496 system_pods.go:61] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.529918 1465496 system_pods.go:74] duration metric: took 183.532223ms to wait for pod list to return data ...
	I0131 03:25:49.529926 1465496 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:49.726239 1465496 default_sa.go:45] found service account: "default"
	I0131 03:25:49.726266 1465496 default_sa.go:55] duration metric: took 196.333831ms for default service account to be created ...
	I0131 03:25:49.726276 1465496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:49.933151 1465496 system_pods.go:86] 8 kube-system pods found
	I0131 03:25:49.933188 1465496 system_pods.go:89] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.933198 1465496 system_pods.go:89] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.933205 1465496 system_pods.go:89] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.933212 1465496 system_pods.go:89] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.933220 1465496 system_pods.go:89] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.933228 1465496 system_pods.go:89] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.933243 1465496 system_pods.go:89] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.933254 1465496 system_pods.go:89] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.933268 1465496 system_pods.go:126] duration metric: took 206.984671ms to wait for k8s-apps to be running ...
	I0131 03:25:49.933282 1465496 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:25:49.933345 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:49.949256 1465496 system_svc.go:56] duration metric: took 15.963316ms WaitForService to wait for kubelet.
	I0131 03:25:49.949290 1465496 kubeadm.go:581] duration metric: took 4.486852525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:25:49.949316 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:25:50.126992 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:25:50.127032 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:25:50.127044 1465496 node_conditions.go:105] duration metric: took 177.723252ms to run NodePressure ...
	I0131 03:25:50.127056 1465496 start.go:228] waiting for startup goroutines ...
	I0131 03:25:50.127063 1465496 start.go:233] waiting for cluster config update ...
	I0131 03:25:50.127072 1465496 start.go:242] writing updated cluster config ...
	I0131 03:25:50.127343 1465496 ssh_runner.go:195] Run: rm -f paused
	I0131 03:25:50.184224 1465496 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0131 03:25:50.186267 1465496 out.go:177] * Done! kubectl is now configured to use "no-preload-625812" cluster and "default" namespace by default
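With the no-preload-625812 context now the default, a quick sanity check of the resulting cluster could look like this (illustrative, not part of the test):

    kubectl get nodes -o wide
    kubectl -n kube-system get pods
    kubectl version   # client 1.29.1 vs. server v1.29.0-rc.2, matching the skew note above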
	I0131 03:25:48.481166 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.977129 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:52.977622 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.586089 1465727 system_pods.go:86] 6 kube-system pods found
	I0131 03:25:50.586129 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:50.586138 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Pending
	I0131 03:25:50.586144 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:50.586151 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Pending
	I0131 03:25:50.586172 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:50.586182 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:50.586211 1465727 retry.go:31] will retry after 13.55623924s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:51.856116 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:53.856661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:55.480823 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:57.978681 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:56.355895 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:58.356767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:59.981147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.479364 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:00.856081 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.977218 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:06.978863 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.148474 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:04.148505 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:04.148511 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Pending
	I0131 03:26:04.148516 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:04.148520 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:04.148524 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:04.148528 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:04.148533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:04.148537 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:04.148555 1465727 retry.go:31] will retry after 14.271857783s: missing components: etcd
	I0131 03:26:05.355042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:07.358366 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:08.981159 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:10.982761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:09.856454 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:12.357096 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:13.478470 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:15.977827 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.426593 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:18.426625 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:18.426634 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Running
	I0131 03:26:18.426641 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:18.426647 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:18.426652 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:18.426657 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:18.426667 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:18.426676 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:18.426690 1465727 system_pods.go:126] duration metric: took 1m9.974130417s to wait for k8s-apps to be running ...
	I0131 03:26:18.426704 1465727 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:26:18.426762 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:26:18.443853 1465727 system_svc.go:56] duration metric: took 17.14056ms WaitForService to wait for kubelet.
	I0131 03:26:18.443902 1465727 kubeadm.go:581] duration metric: took 1m16.810021481s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:26:18.443930 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:26:18.447269 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:26:18.447298 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:26:18.447311 1465727 node_conditions.go:105] duration metric: took 3.375419ms to run NodePressure ...
	I0131 03:26:18.447325 1465727 start.go:228] waiting for startup goroutines ...
	I0131 03:26:18.447333 1465727 start.go:233] waiting for cluster config update ...
	I0131 03:26:18.447348 1465727 start.go:242] writing updated cluster config ...
	I0131 03:26:18.447643 1465727 ssh_runner.go:195] Run: rm -f paused
	I0131 03:26:18.500327 1465727 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0131 03:26:18.502092 1465727 out.go:177] 
	W0131 03:26:18.503693 1465727 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0131 03:26:18.505132 1465727 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0131 03:26:18.506889 1465727 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-711547" cluster and "default" namespace by default
	I0131 03:26:14.856448 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:17.357112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.478401 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:20.977208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.978473 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:19.857118 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.358299 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:25.478227 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:27.978500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:24.855341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:26.855774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:28.856168 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:30.477275 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:32.478896 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:31.357512 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:33.363164 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:34.978058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:37.481411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:35.856084 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:38.358589 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:39.976914 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:41.979388 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:40.856122 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:42.856950 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:44.477345 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:46.478466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:45.356312 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:47.855178 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:48.978543 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.477641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:49.856079 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.856377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:54.358161 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:53.477989 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:55.977887 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:56.855581 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.856493 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.477589 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:00.478116 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:02.978262 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:01.354961 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:03.355994 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.478139 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.977913 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.356248 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.855596 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:10.479147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:12.977533 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:09.856222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:11.857068 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.356693 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.978967 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:17.477119 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:16.854825 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:18.855620 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:19.477877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:21.482081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:20.856333 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.355603 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.978877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:26.477700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:25.356085 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:27.356888 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:28.478497 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:30.977469 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:32.977663 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:29.854905 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:31.855752 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:33.855976 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.480505 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.977880 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.857042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.862112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:39.977961 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.478948 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:40.355787 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.358217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.977950 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.478570 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.855551 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.355853 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.977939 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:51.978267 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.855671 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:52.357889 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:53.979331 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:56.477411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:54.856642 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:57.357372 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:58.478175 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:00.977929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.978272 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:59.856232 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.356390 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:05.477602 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:07.478168 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:04.855423 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:06.859565 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.355517 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.977639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.977754 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.855199 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:13.856260 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:14.477406 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:16.478372 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:15.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:17.861124 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:18.980067 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:21.478833 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:20.356883 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:22.358007 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:23.979040 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.478463 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:24.855207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.855709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.866306 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.978973 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.477340 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.355706 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.855699 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.477521 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:35.478390 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:37.977270 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:36.358244 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:38.855704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:39.979930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.477381 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:40.856442 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.857041 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:44.477500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:46.478446 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:45.356039 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:47.855042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:48.977241 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:50.977925 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:52.978323 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:49.857897 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:51.857941 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:54.357042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.477690 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:57.477927 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.855298 1465898 pod_ready.go:81] duration metric: took 4m0.007008152s waiting for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	E0131 03:28:55.855323 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:28:55.855330 1465898 pod_ready.go:38] duration metric: took 4m2.377385486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:28:55.855346 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:28:55.855399 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:55.855533 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:55.913399 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:55.913425 1465898 cri.go:89] found id: ""
	I0131 03:28:55.913445 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:55.913515 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.918308 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:55.918379 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:55.964846 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:55.964872 1465898 cri.go:89] found id: ""
	I0131 03:28:55.964881 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:55.964942 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.969090 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:55.969158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:56.012247 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:56.012271 1465898 cri.go:89] found id: ""
	I0131 03:28:56.012279 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:56.012337 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.016457 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:56.016535 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:56.053842 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.053867 1465898 cri.go:89] found id: ""
	I0131 03:28:56.053877 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:56.053926 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.057807 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:56.057889 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:28:56.097431 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.097465 1465898 cri.go:89] found id: ""
	I0131 03:28:56.097477 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:28:56.097549 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.101354 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:28:56.101420 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:28:56.136696 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.136725 1465898 cri.go:89] found id: ""
	I0131 03:28:56.136735 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:28:56.136800 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.140584 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:28:56.140661 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:28:56.177606 1465898 cri.go:89] found id: ""
	I0131 03:28:56.177639 1465898 logs.go:284] 0 containers: []
	W0131 03:28:56.177650 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:28:56.177658 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:28:56.177779 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:28:56.215795 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.215824 1465898 cri.go:89] found id: ""
	I0131 03:28:56.215835 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:28:56.215909 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.220297 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:28:56.220324 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:28:56.319500 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:28:56.319544 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.355731 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:28:56.355767 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.410301 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:28:56.410341 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:28:56.858474 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:28:56.858531 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:28:56.903299 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:28:56.903337 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.961020 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:28:56.961070 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.998347 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:28:56.998382 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:28:57.011562 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:28:57.011594 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:28:57.152899 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:28:57.152937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:57.201041 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:28:57.201084 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:57.247253 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:28:57.247289 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.478758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:01.977644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:59.786669 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:28:59.804046 1465898 api_server.go:72] duration metric: took 4m8.808083047s to wait for apiserver process to appear ...
	I0131 03:28:59.804079 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:28:59.804131 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:59.804249 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:59.846418 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:59.846440 1465898 cri.go:89] found id: ""
	I0131 03:28:59.846448 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:59.846516 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.850526 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:59.850588 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:59.892343 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:59.892373 1465898 cri.go:89] found id: ""
	I0131 03:28:59.892382 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:59.892449 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.896483 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:59.896561 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:59.933901 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.933934 1465898 cri.go:89] found id: ""
	I0131 03:28:59.933945 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:59.934012 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.938150 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:59.938232 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:59.980328 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:59.980354 1465898 cri.go:89] found id: ""
	I0131 03:28:59.980363 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:59.980418 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.984866 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:59.984943 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:00.029663 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.029695 1465898 cri.go:89] found id: ""
	I0131 03:29:00.029705 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:00.029753 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.034759 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:00.034827 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:00.084320 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.084347 1465898 cri.go:89] found id: ""
	I0131 03:29:00.084355 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:00.084431 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.088744 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:00.088819 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:00.133028 1465898 cri.go:89] found id: ""
	I0131 03:29:00.133062 1465898 logs.go:284] 0 containers: []
	W0131 03:29:00.133072 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:00.133080 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:00.133145 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:00.175187 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.175219 1465898 cri.go:89] found id: ""
	I0131 03:29:00.175229 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:00.175306 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.179387 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:00.179420 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.233630 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:00.233676 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.271692 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:00.271735 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:00.655131 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:00.655177 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:00.757571 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:00.757628 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:00.805958 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:00.806000 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:00.842604 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:00.842650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:00.888064 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:00.888103 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.939276 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:00.939331 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:00.981965 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:00.982006 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:00.996237 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:00.996265 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:01.129715 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:01.129754 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.677131 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:29:03.684945 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:29:03.687117 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:03.687142 1465898 api_server.go:131] duration metric: took 3.883056117s to wait for apiserver health ...
	I0131 03:29:03.687171 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:03.687245 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:03.687303 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:03.727289 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:03.727314 1465898 cri.go:89] found id: ""
	I0131 03:29:03.727322 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:29:03.727375 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.731095 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:03.731158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:03.779103 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.779134 1465898 cri.go:89] found id: ""
	I0131 03:29:03.779144 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:29:03.779223 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.783387 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:03.783459 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:03.821342 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:03.821368 1465898 cri.go:89] found id: ""
	I0131 03:29:03.821376 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:29:03.821438 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.825907 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:03.825990 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:03.863826 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:03.863853 1465898 cri.go:89] found id: ""
	I0131 03:29:03.863867 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:29:03.863919 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.868093 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:03.868163 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:03.908653 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:03.908681 1465898 cri.go:89] found id: ""
	I0131 03:29:03.908690 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:03.908750 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.912998 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:03.913078 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:03.961104 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:03.961131 1465898 cri.go:89] found id: ""
	I0131 03:29:03.961139 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:03.961212 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.965913 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:03.965996 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:04.003791 1465898 cri.go:89] found id: ""
	I0131 03:29:04.003824 1465898 logs.go:284] 0 containers: []
	W0131 03:29:04.003833 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:04.003840 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:04.003907 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:04.040736 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.040773 1465898 cri.go:89] found id: ""
	I0131 03:29:04.040785 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:04.040852 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:04.045013 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:04.045042 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:04.091615 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:04.091650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:04.204602 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:04.204638 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:04.257510 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:04.257548 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:04.296585 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:04.296619 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:04.360438 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:04.360480 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.398825 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:04.398858 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:04.711357 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:04.711403 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:04.804895 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:04.804940 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:04.819394 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:04.819426 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:04.869897 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:04.869937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:04.918002 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:04.918040 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:07.471428 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:07.471466 1465898 system_pods.go:61] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.471474 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.471481 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.471488 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.471495 1465898 system_pods.go:61] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.471501 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.471516 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.471524 1465898 system_pods.go:61] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.471535 1465898 system_pods.go:74] duration metric: took 3.784356035s to wait for pod list to return data ...
	I0131 03:29:07.471552 1465898 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:07.474519 1465898 default_sa.go:45] found service account: "default"
	I0131 03:29:07.474547 1465898 default_sa.go:55] duration metric: took 2.986529ms for default service account to be created ...
	I0131 03:29:07.474559 1465898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:07.480778 1465898 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:07.480805 1465898 system_pods.go:89] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.480810 1465898 system_pods.go:89] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.480816 1465898 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.480823 1465898 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.480827 1465898 system_pods.go:89] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.480831 1465898 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.480837 1465898 system_pods.go:89] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.480842 1465898 system_pods.go:89] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.480850 1465898 system_pods.go:126] duration metric: took 6.285456ms to wait for k8s-apps to be running ...
	I0131 03:29:07.480856 1465898 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:07.480905 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:07.497612 1465898 system_svc.go:56] duration metric: took 16.74594ms WaitForService to wait for kubelet.
	I0131 03:29:07.497643 1465898 kubeadm.go:581] duration metric: took 4m16.501686281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:07.497678 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:07.501680 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:07.501732 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:07.501748 1465898 node_conditions.go:105] duration metric: took 4.063716ms to run NodePressure ...
	I0131 03:29:07.501763 1465898 start.go:228] waiting for startup goroutines ...
	I0131 03:29:07.501772 1465898 start.go:233] waiting for cluster config update ...
	I0131 03:29:07.501818 1465898 start.go:242] writing updated cluster config ...
	I0131 03:29:07.502234 1465898 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:07.559193 1465898 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:07.561350 1465898 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-873005" cluster and "default" namespace by default
	I0131 03:29:03.978465 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:06.477545 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:08.480466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:10.978639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:13.478152 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978967 1466459 pod_ready.go:81] duration metric: took 4m0.008624682s waiting for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	E0131 03:29:15.978976 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:29:15.978984 1466459 pod_ready.go:38] duration metric: took 4m1.99139457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:29:15.978999 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:29:15.979026 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:15.979074 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:16.041735 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:16.041774 1466459 cri.go:89] found id: ""
	I0131 03:29:16.041784 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:16.041845 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.046910 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:16.046982 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:16.085124 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.085156 1466459 cri.go:89] found id: ""
	I0131 03:29:16.085166 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:16.085226 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.089189 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:16.089274 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:16.129255 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.129286 1466459 cri.go:89] found id: ""
	I0131 03:29:16.129296 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:16.129352 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.133364 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:16.133451 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:16.170605 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.170634 1466459 cri.go:89] found id: ""
	I0131 03:29:16.170643 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:16.170704 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.175117 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:16.175197 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:16.210139 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:16.210169 1466459 cri.go:89] found id: ""
	I0131 03:29:16.210179 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:16.210248 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.214877 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:16.214960 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:16.257772 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.257797 1466459 cri.go:89] found id: ""
	I0131 03:29:16.257807 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:16.257878 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.262276 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:16.262341 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:16.304203 1466459 cri.go:89] found id: ""
	I0131 03:29:16.304233 1466459 logs.go:284] 0 containers: []
	W0131 03:29:16.304241 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:16.304248 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:16.304325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:16.343337 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:16.343360 1466459 cri.go:89] found id: ""
	I0131 03:29:16.343368 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:16.343423 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.347098 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:16.347129 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.389501 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:16.389544 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.426153 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:16.426196 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.476241 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:16.476281 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.533086 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:16.533131 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:16.575664 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:16.575701 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:16.675622 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:16.675669 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:16.690251 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:16.690285 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:16.828714 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:16.828748 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:17.253277 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:17.253335 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:17.304285 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:17.304323 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:17.340432 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:17.340465 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
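	Each "Gathering logs for ..." step above first resolves a container ID with crictl and then tails that container's log. A minimal sketch of the same collection done by hand inside the node, with the commands copied from the Run: lines (the <container-id> placeholder stands for whatever ID the ps call returns):
	# Open a shell on the node for the embed-certs-958254 profile
	minikube ssh -p embed-certs-958254
	# Resolve the container ID for a component, e.g. kube-apiserver
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Tail the last 400 lines of that container's log
	sudo crictl logs --tail 400 <container-id>
	# Unit logs for kubelet and CRI-O, as gathered above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400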
	I0131 03:29:19.889056 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:29:19.904225 1466459 api_server.go:72] duration metric: took 4m8.286630357s to wait for apiserver process to appear ...
	I0131 03:29:19.904258 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:29:19.904302 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:19.904375 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:19.939116 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:19.939147 1466459 cri.go:89] found id: ""
	I0131 03:29:19.939159 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:19.939225 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.943273 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:19.943351 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:19.979411 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:19.979436 1466459 cri.go:89] found id: ""
	I0131 03:29:19.979445 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:19.979512 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.984054 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:19.984148 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:20.022949 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.022978 1466459 cri.go:89] found id: ""
	I0131 03:29:20.022988 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:20.023046 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.027252 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:20.027325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:20.064215 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.064238 1466459 cri.go:89] found id: ""
	I0131 03:29:20.064246 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:20.064303 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.068589 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:20.068687 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:20.106750 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.106781 1466459 cri.go:89] found id: ""
	I0131 03:29:20.106792 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:20.106854 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.111267 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:20.111342 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:20.147750 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.147789 1466459 cri.go:89] found id: ""
	I0131 03:29:20.147801 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:20.147873 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.152882 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:20.152950 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:20.191082 1466459 cri.go:89] found id: ""
	I0131 03:29:20.191121 1466459 logs.go:284] 0 containers: []
	W0131 03:29:20.191133 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:20.191143 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:20.191226 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:20.226346 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.226373 1466459 cri.go:89] found id: ""
	I0131 03:29:20.226382 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:20.226436 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.230561 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:20.230607 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:20.596919 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:20.596968 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:20.691142 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:20.691184 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:20.750659 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:20.750692 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.816839 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:20.816882 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.852691 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:20.852730 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.909788 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:20.909828 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.950311 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:20.950360 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.985515 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:20.985554 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:21.030306 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:21.030350 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:21.043130 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:21.043172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:21.160716 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:21.160763 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.706550 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:29:23.711528 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:29:23.713998 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:23.714027 1466459 api_server.go:131] duration metric: took 3.809760557s to wait for apiserver health ...
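	The healthz probe above and the reported v1.28.4 control-plane version can be reproduced directly against the cluster; a minimal sketch, assuming the embed-certs-958254 kubeconfig context (https://192.168.39.232:8443 is the endpoint shown in the log):
	# Raw health endpoint, should print "ok" as above
	kubectl --context embed-certs-958254 get --raw /healthz
	# Client and server versions (server should report v1.28.4)
	kubectl --context embed-certs-958254 version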
	I0131 03:29:23.714039 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:23.714070 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:23.714142 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:23.754990 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:23.755017 1466459 cri.go:89] found id: ""
	I0131 03:29:23.755028 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:23.755091 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.759151 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:23.759224 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:23.798410 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.798448 1466459 cri.go:89] found id: ""
	I0131 03:29:23.798459 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:23.798541 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.802512 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:23.802588 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:23.840962 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:23.840991 1466459 cri.go:89] found id: ""
	I0131 03:29:23.841001 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:23.841073 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.844943 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:23.845021 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:23.882314 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:23.882355 1466459 cri.go:89] found id: ""
	I0131 03:29:23.882368 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:23.882438 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.886227 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:23.886292 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:23.925001 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:23.925031 1466459 cri.go:89] found id: ""
	I0131 03:29:23.925042 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:23.925100 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.929531 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:23.929601 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:23.969068 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:23.969098 1466459 cri.go:89] found id: ""
	I0131 03:29:23.969108 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:23.969167 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.973154 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:23.973216 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:24.010928 1466459 cri.go:89] found id: ""
	I0131 03:29:24.010956 1466459 logs.go:284] 0 containers: []
	W0131 03:29:24.010963 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:24.010970 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:24.011026 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:24.052588 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.052614 1466459 cri.go:89] found id: ""
	I0131 03:29:24.052622 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:24.052678 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:24.056735 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:24.056762 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:24.105290 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:24.105324 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:24.152634 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:24.152678 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:24.198981 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:24.199021 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:24.247140 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:24.247172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:24.287472 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:24.287502 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:24.344060 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:24.344101 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.384811 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:24.384846 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:24.707577 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:24.707628 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:24.756450 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:24.756490 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:24.844886 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:24.844935 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:24.859102 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:24.859132 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
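	The pod and service-account checks whose results follow can also be run directly with kubectl; a minimal sketch, again assuming the embed-certs-958254 context:
	# The 8 kube-system pods enumerated below
	kubectl --context embed-certs-958254 -n kube-system get pods
	# The "default" service account found below
	kubectl --context embed-certs-958254 -n default get serviceaccount default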
	I0131 03:29:27.482952 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:27.482992 1466459 system_pods.go:61] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.483000 1466459 system_pods.go:61] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.483007 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.483027 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.483038 1466459 system_pods.go:61] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.483049 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.483056 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.483066 1466459 system_pods.go:61] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.483076 1466459 system_pods.go:74] duration metric: took 3.76903179s to wait for pod list to return data ...
	I0131 03:29:27.483087 1466459 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:27.486092 1466459 default_sa.go:45] found service account: "default"
	I0131 03:29:27.486121 1466459 default_sa.go:55] duration metric: took 3.025473ms for default service account to be created ...
	I0131 03:29:27.486131 1466459 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:27.491964 1466459 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:27.491989 1466459 system_pods.go:89] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.491997 1466459 system_pods.go:89] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.492004 1466459 system_pods.go:89] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.492010 1466459 system_pods.go:89] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.492015 1466459 system_pods.go:89] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.492022 1466459 system_pods.go:89] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.492032 1466459 system_pods.go:89] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.492044 1466459 system_pods.go:89] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.492059 1466459 system_pods.go:126] duration metric: took 5.920402ms to wait for k8s-apps to be running ...
	I0131 03:29:27.492076 1466459 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:27.492131 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:27.507857 1466459 system_svc.go:56] duration metric: took 15.770556ms WaitForService to wait for kubelet.
	I0131 03:29:27.507891 1466459 kubeadm.go:581] duration metric: took 4m15.890307101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:27.507918 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:27.510942 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:27.510968 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:27.510980 1466459 node_conditions.go:105] duration metric: took 3.056564ms to run NodePressure ...
	I0131 03:29:27.510992 1466459 start.go:228] waiting for startup goroutines ...
	I0131 03:29:27.510998 1466459 start.go:233] waiting for cluster config update ...
	I0131 03:29:27.511008 1466459 start.go:242] writing updated cluster config ...
	I0131 03:29:27.511334 1466459 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:27.564506 1466459 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:27.566730 1466459 out.go:177] * Done! kubectl is now configured to use "embed-certs-958254" cluster and "default" namespace by default
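	The closing "minor skew: 1" note means the kubectl client (1.29.1) is one minor version ahead of the 1.28.4 cluster, which is within kubectl's supported one-minor-version skew. A minimal sketch of confirming the skew:
	# Prints both client and server versions for the embed-certs-958254 context
	kubectl --context embed-certs-958254 version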
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:19:25 UTC, ends at Wed 2024-01-31 03:40:51 UTC. --
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.272385636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672451272372249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=aad60919-91ec-4d91-a0eb-48cf382e1e0a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.272998573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=05c1b0b0-2c3d-4071-bef2-6871eeebe70c name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.273044355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=05c1b0b0-2c3d-4071-bef2-6871eeebe70c name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.273209051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4,PodSandboxId:81633eda4b4b6493505e4cf9f1533aa0c4089bdade69ac526ce9eac8e35bbfc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671494702297973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db68da18-b403-43a6-abdd-f3354e633a5c,},Annotations:map[string]string{io.kubernetes.container.hash: a6a96ae5,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5,PodSandboxId:0f674b1d9d1bde9f2ae0a752db7e07644cfaaa5d60f27ee7ed24251831543611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671494201005964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blwwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190c406e-eb21-4420-bcec-ad218ec4b760,},Annotations:map[string]string{io.kubernetes.container.hash: a97f7de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9,PodSandboxId:19645c821c2221e773501353e9ba91f3829dd284c4017500c7b6bc3af164b66f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671493246449131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gdks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35e6baf-1ad9-4df7-bbb1-a2443f8c658f,},Annotations:map[string]string{io.kubernetes.container.hash: e68a6069,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e,PodSandboxId:0115656e4008f9184d3ef2b731d827d8842a25fb27bb4f1aee6a9360930e4a6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671469943028659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6b040e403e1f7b8f444afeddf58495ac,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd,PodSandboxId:9ed56be620ecc98e86933195a507ca808be1ef9d7d7f76a7080fac467caaa78f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671469350263892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2905c6218f261e3cf3463b3e9b70ca0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42630948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4,PodSandboxId:b86c8f503e84e886c1d6e6ceaaa9e3deb5a207f3d59c52594af655d7cad10dbd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671469254334938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-873005,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf9a6b1d9beb04cd73df00e42e9d441,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9,PodSandboxId:3818469509b8c50cf0b0dd0172dff260b20f3cf1435288f103662bd4c209a567,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671469043703900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a27c21cb939d09b9a4d98297cb64863b,},Annotations:map[string]string{io.kubernetes.container.hash: 681801f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=05c1b0b0-2c3d-4071-bef2-6871eeebe70c name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.307766102Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dff715b9-27fe-415f-991b-c9913a28e665 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.307889931Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dff715b9-27fe-415f-991b-c9913a28e665 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.309013775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8787efb4-0bf1-4667-b2c6-10f89b053835 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.309458173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672451309423450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8787efb4-0bf1-4667-b2c6-10f89b053835 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.310106513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7428cb59-7014-41a2-9265-c22a4dfac869 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.310169114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7428cb59-7014-41a2-9265-c22a4dfac869 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.310343062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4,PodSandboxId:81633eda4b4b6493505e4cf9f1533aa0c4089bdade69ac526ce9eac8e35bbfc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671494702297973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db68da18-b403-43a6-abdd-f3354e633a5c,},Annotations:map[string]string{io.kubernetes.container.hash: a6a96ae5,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5,PodSandboxId:0f674b1d9d1bde9f2ae0a752db7e07644cfaaa5d60f27ee7ed24251831543611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671494201005964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blwwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190c406e-eb21-4420-bcec-ad218ec4b760,},Annotations:map[string]string{io.kubernetes.container.hash: a97f7de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9,PodSandboxId:19645c821c2221e773501353e9ba91f3829dd284c4017500c7b6bc3af164b66f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671493246449131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gdks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35e6baf-1ad9-4df7-bbb1-a2443f8c658f,},Annotations:map[string]string{io.kubernetes.container.hash: e68a6069,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e,PodSandboxId:0115656e4008f9184d3ef2b731d827d8842a25fb27bb4f1aee6a9360930e4a6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671469943028659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6b040e403e1f7b8f444afeddf58495ac,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd,PodSandboxId:9ed56be620ecc98e86933195a507ca808be1ef9d7d7f76a7080fac467caaa78f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671469350263892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2905c6218f261e3cf3463b3e9b70ca0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42630948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4,PodSandboxId:b86c8f503e84e886c1d6e6ceaaa9e3deb5a207f3d59c52594af655d7cad10dbd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671469254334938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-873005,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf9a6b1d9beb04cd73df00e42e9d441,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9,PodSandboxId:3818469509b8c50cf0b0dd0172dff260b20f3cf1435288f103662bd4c209a567,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671469043703900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a27c21cb939d09b9a4d98297cb64863b,},Annotations:map[string]string{io.kubernetes.container.hash: 681801f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7428cb59-7014-41a2-9265-c22a4dfac869 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.346097499Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5c7da42c-70f2-453a-b5f3-c14900472229 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.346179084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5c7da42c-70f2-453a-b5f3-c14900472229 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.348566670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b485fb69-3cf2-4bb1-a2c9-2ade20682929 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.349070578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672451349056913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b485fb69-3cf2-4bb1-a2c9-2ade20682929 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.349637878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=17216d84-5f65-43a3-afbb-2b7f28cb0eca name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.349701234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=17216d84-5f65-43a3-afbb-2b7f28cb0eca name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.349930959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4,PodSandboxId:81633eda4b4b6493505e4cf9f1533aa0c4089bdade69ac526ce9eac8e35bbfc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671494702297973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db68da18-b403-43a6-abdd-f3354e633a5c,},Annotations:map[string]string{io.kubernetes.container.hash: a6a96ae5,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5,PodSandboxId:0f674b1d9d1bde9f2ae0a752db7e07644cfaaa5d60f27ee7ed24251831543611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671494201005964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blwwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190c406e-eb21-4420-bcec-ad218ec4b760,},Annotations:map[string]string{io.kubernetes.container.hash: a97f7de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9,PodSandboxId:19645c821c2221e773501353e9ba91f3829dd284c4017500c7b6bc3af164b66f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671493246449131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gdks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35e6baf-1ad9-4df7-bbb1-a2443f8c658f,},Annotations:map[string]string{io.kubernetes.container.hash: e68a6069,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e,PodSandboxId:0115656e4008f9184d3ef2b731d827d8842a25fb27bb4f1aee6a9360930e4a6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671469943028659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6b040e403e1f7b8f444afeddf58495ac,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd,PodSandboxId:9ed56be620ecc98e86933195a507ca808be1ef9d7d7f76a7080fac467caaa78f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671469350263892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2905c6218f261e3cf3463b3e9b70ca0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42630948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4,PodSandboxId:b86c8f503e84e886c1d6e6ceaaa9e3deb5a207f3d59c52594af655d7cad10dbd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671469254334938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-873005,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf9a6b1d9beb04cd73df00e42e9d441,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9,PodSandboxId:3818469509b8c50cf0b0dd0172dff260b20f3cf1435288f103662bd4c209a567,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671469043703900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a27c21cb939d09b9a4d98297cb64863b,},Annotations:map[string]string{io.kubernetes.container.hash: 681801f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=17216d84-5f65-43a3-afbb-2b7f28cb0eca name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.382705830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9ee3d06a-a13f-434a-a055-05970483b98c name=/runtime.v1.RuntimeService/Version
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.382764225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9ee3d06a-a13f-434a-a055-05970483b98c name=/runtime.v1.RuntimeService/Version
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.384096364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f13076ef-36da-4705-9639-81e184f22768 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.384447039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672451384435756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f13076ef-36da-4705-9639-81e184f22768 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.385049449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2c62bce6-2df9-4ff9-91a5-6fe7239c6a75 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.385156337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2c62bce6-2df9-4ff9-91a5-6fe7239c6a75 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:40:51 default-k8s-diff-port-873005 crio[717]: time="2024-01-31 03:40:51.385583864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4,PodSandboxId:81633eda4b4b6493505e4cf9f1533aa0c4089bdade69ac526ce9eac8e35bbfc5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671494702297973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db68da18-b403-43a6-abdd-f3354e633a5c,},Annotations:map[string]string{io.kubernetes.container.hash: a6a96ae5,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5,PodSandboxId:0f674b1d9d1bde9f2ae0a752db7e07644cfaaa5d60f27ee7ed24251831543611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671494201005964,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-blwwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190c406e-eb21-4420-bcec-ad218ec4b760,},Annotations:map[string]string{io.kubernetes.container.hash: a97f7de4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9,PodSandboxId:19645c821c2221e773501353e9ba91f3829dd284c4017500c7b6bc3af164b66f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671493246449131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5gdks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a35e6baf-1ad9-4df7-bbb1-a2443f8c658f,},Annotations:map[string]string{io.kubernetes.container.hash: e68a6069,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e,PodSandboxId:0115656e4008f9184d3ef2b731d827d8842a25fb27bb4f1aee6a9360930e4a6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671469943028659,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 6b040e403e1f7b8f444afeddf58495ac,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd,PodSandboxId:9ed56be620ecc98e86933195a507ca808be1ef9d7d7f76a7080fac467caaa78f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671469350263892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 2905c6218f261e3cf3463b3e9b70ca0d,},Annotations:map[string]string{io.kubernetes.container.hash: 42630948,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4,PodSandboxId:b86c8f503e84e886c1d6e6ceaaa9e3deb5a207f3d59c52594af655d7cad10dbd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671469254334938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-873005,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdf9a6b1d9beb04cd73df00e42e9d441,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9,PodSandboxId:3818469509b8c50cf0b0dd0172dff260b20f3cf1435288f103662bd4c209a567,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671469043703900,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-873005,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a27c21cb939d09b9a4d98297cb64863b,},Annotations:map[string]string{io.kubernetes.container.hash: 681801f7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2c62bce6-2df9-4ff9-91a5-6fe7239c6a75 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7cd76e5e503bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   81633eda4b4b6       storage-provisioner
	fc0700086e958       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   0f674b1d9d1bd       kube-proxy-blwwq
	8dc2215c9bd1d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   19645c821c222       coredns-5dd5756b68-5gdks
	bb28486f5d752       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   0115656e4008f       kube-scheduler-default-k8s-diff-port-873005
	3feac299b4d0a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   9ed56be620ecc       kube-apiserver-default-k8s-diff-port-873005
	a80c35ecce811       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   b86c8f503e84e       kube-controller-manager-default-k8s-diff-port-873005
	bc73770fd85b8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   3818469509b8c       etcd-default-k8s-diff-port-873005
	
	
	==> coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:56121 - 47465 "HINFO IN 535969699749763465.3459180298032533492. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006826379s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-873005
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-873005
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=default-k8s-diff-port-873005
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:24:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-873005
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 03:40:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:40:14 +0000   Wed, 31 Jan 2024 03:24:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:40:14 +0000   Wed, 31 Jan 2024 03:24:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:40:14 +0000   Wed, 31 Jan 2024 03:24:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:40:14 +0000   Wed, 31 Jan 2024 03:24:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.123
	  Hostname:    default-k8s-diff-port-873005
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a71ae2a1a134dc1a5493b4b45b07d10
	  System UUID:                0a71ae2a-1a13-4dc1-a549-3b4b45b07d10
	  Boot ID:                    a829a32b-2296-4678-b46a-8f074f5c5437
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-5gdks                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-873005                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-873005             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-873005    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-blwwq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-873005             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-k4ht8                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m                kubelet          Node default-k8s-diff-port-873005 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-873005 event: Registered Node default-k8s-diff-port-873005 in Controller
	
	
	==> dmesg <==
	[Jan31 03:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064441] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.502153] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.683099] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135151] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.390510] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.277437] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.127631] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.162396] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.130556] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.248461] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +17.188632] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[Jan31 03:20] kauditd_printk_skb: 29 callbacks suppressed
	[Jan31 03:24] systemd-fstab-generator[3507]: Ignoring "noauto" for root device
	[  +8.791078] systemd-fstab-generator[3836]: Ignoring "noauto" for root device
	[ +14.241088] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.534625] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] <==
	{"level":"info","ts":"2024-01-31T03:24:30.033131Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.123:2380"}
	{"level":"info","ts":"2024-01-31T03:24:30.77093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:30.771116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:30.771174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 received MsgPreVoteResp from 9b1c55f2bfc48094 at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:30.771225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 became candidate at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:30.77126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 received MsgVoteResp from 9b1c55f2bfc48094 at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:30.771299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b1c55f2bfc48094 became leader at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:30.771333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b1c55f2bfc48094 elected leader 9b1c55f2bfc48094 at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:30.776138Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9b1c55f2bfc48094","local-member-attributes":"{Name:default-k8s-diff-port-873005 ClientURLs:[https://192.168.61.123:2379]}","request-path":"/0/members/9b1c55f2bfc48094/attributes","cluster-id":"f7e64f166fed626b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:24:30.777928Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:24:30.785172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:24:30.785296Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:30.785443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:24:30.792699Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.123:2379"}
	{"level":"info","ts":"2024-01-31T03:24:30.799159Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f7e64f166fed626b","local-member-id":"9b1c55f2bfc48094","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:30.799275Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:30.799322Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:30.799525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:24:30.79954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T03:34:31.190866Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-01-31T03:34:31.193756Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":722,"took":"2.113679ms","hash":3918872999}
	{"level":"info","ts":"2024-01-31T03:34:31.194053Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3918872999,"revision":722,"compact-revision":-1}
	{"level":"info","ts":"2024-01-31T03:39:31.199034Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":965}
	{"level":"info","ts":"2024-01-31T03:39:31.201153Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":965,"took":"1.574387ms","hash":582442858}
	{"level":"info","ts":"2024-01-31T03:39:31.201257Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":582442858,"revision":965,"compact-revision":722}
	
	
	==> kernel <==
	 03:40:51 up 21 min,  0 users,  load average: 0.19, 0.22, 0.21
	Linux default-k8s-diff-port-873005 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] <==
	W0131 03:37:34.313374       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:37:34.313505       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:37:34.313598       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:38:33.139284       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0131 03:39:33.139745       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:39:33.314598       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:39:33.314890       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:39:33.315454       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:39:34.315308       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:39:34.315464       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:39:34.315493       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:39:34.315642       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:39:34.315745       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:39:34.316898       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:40:33.139324       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:40:34.316354       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:40:34.316455       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:40:34.316484       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:40:34.317509       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:40:34.317590       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:40:34.317605       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] <==
	I0131 03:35:19.967564       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:35:49.491225       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:35:49.977258       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:36:11.906400       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="298.935µs"
	E0131 03:36:19.496516       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:36:19.985656       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:36:24.908226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="113.683µs"
	E0131 03:36:49.501991       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:36:49.994354       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:37:19.507911       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:37:20.003475       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:37:49.515626       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:37:50.011748       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:38:19.522095       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:38:20.022482       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:38:49.527975       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:38:50.031674       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:39:19.534006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:39:20.040254       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:39:49.540611       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:39:50.048666       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:40:19.546249       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:40:20.057713       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:40:49.552145       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:40:50.067929       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] <==
	I0131 03:24:54.680375       1 server_others.go:69] "Using iptables proxy"
	I0131 03:24:54.707433       1 node.go:141] Successfully retrieved node IP: 192.168.61.123
	I0131 03:24:54.782499       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 03:24:54.782623       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:24:54.788428       1 server_others.go:152] "Using iptables Proxier"
	I0131 03:24:54.789162       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:24:54.789502       1 server.go:846] "Version info" version="v1.28.4"
	I0131 03:24:54.789533       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:24:54.791611       1 config.go:188] "Starting service config controller"
	I0131 03:24:54.792392       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:24:54.792488       1 config.go:315] "Starting node config controller"
	I0131 03:24:54.792516       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:24:54.795022       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:24:54.795080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:24:54.896044       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0131 03:24:54.896192       1 shared_informer.go:318] Caches are synced for node config
	I0131 03:24:54.896372       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] <==
	W0131 03:24:33.350398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 03:24:33.350494       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0131 03:24:33.350479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:24:33.350626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:24:34.234011       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0131 03:24:34.234065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0131 03:24:34.316934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:34.316990       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:34.363414       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:24:34.363521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 03:24:34.396639       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 03:24:34.396748       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0131 03:24:34.404737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:24:34.405096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0131 03:24:34.407283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:34.407348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:34.480177       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:24:34.480235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0131 03:24:34.526271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:34.526320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:34.536014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:34.536058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:34.623375       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0131 03:24:34.623463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0131 03:24:36.126675       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:19:25 UTC, ends at Wed 2024-01-31 03:40:51 UTC. --
	Jan 31 03:38:19 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:38:19.884936    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:38:34 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:38:34.886539    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:38:36 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:38:36.972065    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:38:36 default-k8s-diff-port-873005 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:38:36 default-k8s-diff-port-873005 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:38:36 default-k8s-diff-port-873005 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:38:45 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:38:45.885433    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:39:00 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:39:00.886148    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:39:12 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:39:12.887156    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:39:25 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:39:25.886062    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:39:36 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:39:36.969608    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:39:36 default-k8s-diff-port-873005 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:39:36 default-k8s-diff-port-873005 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:39:36 default-k8s-diff-port-873005 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:39:37 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:39:37.142343    3843 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 31 03:39:38 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:39:38.886981    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:39:53 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:39:53.885764    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:40:04 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:40:04.886748    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:40:17 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:40:17.885867    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:40:32 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:40:32.885329    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	Jan 31 03:40:36 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:40:36.967485    3843 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:40:36 default-k8s-diff-port-873005 kubelet[3843]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:40:36 default-k8s-diff-port-873005 kubelet[3843]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:40:36 default-k8s-diff-port-873005 kubelet[3843]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:40:45 default-k8s-diff-port-873005 kubelet[3843]: E0131 03:40:45.885270    3843 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-k4ht8" podUID="604feb17-6aaf-40e8-a6e6-01c899530151"
	
	
	==> storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] <==
	I0131 03:24:54.843111       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 03:24:54.856425       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 03:24:54.856557       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 03:24:54.871411       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 03:24:54.873920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-873005_a9037baf-05c8-49c6-9199-5be5275f8ac8!
	I0131 03:24:54.877236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a96113e-6153-4ae4-a3a1-c6eddde8bb54", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-873005_a9037baf-05c8-49c6-9199-5be5275f8ac8 became leader
	I0131 03:24:54.974429       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-873005_a9037baf-05c8-49c6-9199-5be5275f8ac8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-873005 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-k4ht8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-873005 describe pod metrics-server-57f55c9bc5-k4ht8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-873005 describe pod metrics-server-57f55c9bc5-k4ht8: exit status 1 (64.717036ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-k4ht8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-873005 describe pod metrics-server-57f55c9bc5-k4ht8: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (161.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (288.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0131 03:38:38.351108 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 03:38:38.885971 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:39:00.516364 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-958254 -n embed-certs-958254
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-31 03:43:17.183844435 +0000 UTC m=+5955.825192261
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-958254 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-958254 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.869µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-958254 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-958254 -n embed-certs-958254
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-958254 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-958254 logs -n 25: (1.577542901s)
E0131 03:43:19.211927 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC |                     |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-229073             | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-229073                  | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-229073 --memory=2200 --alsologtostderr   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:12 UTC | 31 Jan 24 03:13 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-229073 image list                           | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p newest-cni-229073                                   | newest-cni-229073            | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-096443 | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:13 UTC |
	|         | disable-driver-mounts-096443                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:13 UTC | 31 Jan 24 03:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-625812                  | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:25 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-711547             | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-873005       | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-958254            | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:14 UTC | 31 Jan 24 03:29 UTC |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-958254                 | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-958254                                  | embed-certs-958254           | jenkins | v1.32.0 | 31 Jan 24 03:17 UTC | 31 Jan 24 03:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-711547                              | old-k8s-version-711547       | jenkins | v1.32.0 | 31 Jan 24 03:39 UTC | 31 Jan 24 03:39 UTC |
	| delete  | -p no-preload-625812                                   | no-preload-625812            | jenkins | v1.32.0 | 31 Jan 24 03:39 UTC | 31 Jan 24 03:39 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-873005 | jenkins | v1.32.0 | 31 Jan 24 03:40 UTC | 31 Jan 24 03:40 UTC |
	|         | default-k8s-diff-port-873005                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 03:17:03
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 03:17:03.356553 1466459 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:17:03.356722 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356731 1466459 out.go:309] Setting ErrFile to fd 2...
	I0131 03:17:03.356736 1466459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:17:03.356921 1466459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:17:03.357497 1466459 out.go:303] Setting JSON to false
	I0131 03:17:03.358564 1466459 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":28767,"bootTime":1706642257,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:17:03.358632 1466459 start.go:138] virtualization: kvm guest
	I0131 03:17:03.361346 1466459 out.go:177] * [embed-certs-958254] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:17:03.363037 1466459 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:17:03.363052 1466459 notify.go:220] Checking for updates...
	I0131 03:17:03.364655 1466459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:17:03.366388 1466459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:17:03.368086 1466459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:17:03.369351 1466459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:17:03.370735 1466459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:17:03.372623 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:17:03.373004 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.373116 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.388091 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0131 03:17:03.388612 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.389200 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.389224 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.389606 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.389816 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.390157 1466459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:17:03.390631 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:17:03.390696 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:17:03.407513 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0131 03:17:03.408013 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:17:03.408552 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:17:03.408578 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:17:03.408936 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:17:03.409175 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:17:03.446580 1466459 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 03:17:03.447834 1466459 start.go:298] selected driver: kvm2
	I0131 03:17:03.447850 1466459 start.go:902] validating driver "kvm2" against &{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.447974 1466459 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:17:03.448798 1466459 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.448929 1466459 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 03:17:03.464292 1466459 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 03:17:03.464713 1466459 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0131 03:17:03.464803 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:17:03.464821 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:17:03.464840 1466459 start_flags.go:321] config:
	{Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:17:03.465034 1466459 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 03:17:03.466926 1466459 out.go:177] * Starting control plane node embed-certs-958254 in cluster embed-certs-958254
	I0131 03:17:03.166851 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:03.468094 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:17:03.468158 1466459 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 03:17:03.468179 1466459 cache.go:56] Caching tarball of preloaded images
	I0131 03:17:03.468267 1466459 preload.go:174] Found /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0131 03:17:03.468280 1466459 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 03:17:03.468422 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:17:03.468675 1466459 start.go:365] acquiring machines lock for embed-certs-958254: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:17:09.246814 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:12.318761 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:18.398731 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:21.470788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:27.550785 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:30.622804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:36.702802 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:39.774755 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:45.854764 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:48.926773 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:55.006804 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:17:58.078768 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:04.158801 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:07.230749 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:13.310800 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:16.382788 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:22.462833 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:25.534734 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:31.614821 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:34.686831 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:40.766796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:43.838796 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:49.918807 1465496 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.23:22: connect: no route to host
	I0131 03:18:52.923102 1465727 start.go:369] acquired machines lock for "old-k8s-version-711547" in 4m24.328353275s
	I0131 03:18:52.923156 1465727 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:18:52.923163 1465727 fix.go:54] fixHost starting: 
	I0131 03:18:52.923502 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:18:52.923535 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:18:52.938858 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0131 03:18:52.939426 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:18:52.939966 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:18:52.939993 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:18:52.940435 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:18:52.940700 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:18:52.940890 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:18:52.942694 1465727 fix.go:102] recreateIfNeeded on old-k8s-version-711547: state=Stopped err=<nil>
	I0131 03:18:52.942735 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	W0131 03:18:52.942937 1465727 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:18:52.944846 1465727 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-711547" ...
	I0131 03:18:52.946449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Start
	I0131 03:18:52.946661 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring networks are active...
	I0131 03:18:52.947481 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network default is active
	I0131 03:18:52.947856 1465727 main.go:141] libmachine: (old-k8s-version-711547) Ensuring network mk-old-k8s-version-711547 is active
	I0131 03:18:52.948334 1465727 main.go:141] libmachine: (old-k8s-version-711547) Getting domain xml...
	I0131 03:18:52.949108 1465727 main.go:141] libmachine: (old-k8s-version-711547) Creating domain...
	I0131 03:18:52.920695 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:18:52.920763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:18:52.922905 1465496 machine.go:91] provisioned docker machine in 4m37.358485704s
	I0131 03:18:52.922986 1465496 fix.go:56] fixHost completed within 4m37.381896689s
	I0131 03:18:52.922997 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 4m37.381936859s
	W0131 03:18:52.923026 1465496 start.go:694] error starting host: provision: host is not running
	W0131 03:18:52.923126 1465496 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0131 03:18:52.923138 1465496 start.go:709] Will try again in 5 seconds ...
	I0131 03:18:54.170545 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting to get IP...
	I0131 03:18:54.171580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.171974 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.172053 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.171968 1467209 retry.go:31] will retry after 195.285731ms: waiting for machine to come up
	I0131 03:18:54.368768 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.369288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.369325 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.369224 1467209 retry.go:31] will retry after 291.163288ms: waiting for machine to come up
	I0131 03:18:54.661822 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:54.662222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:54.662266 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:54.662214 1467209 retry.go:31] will retry after 396.125436ms: waiting for machine to come up
	I0131 03:18:55.059613 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.060062 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.060099 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.060009 1467209 retry.go:31] will retry after 609.786973ms: waiting for machine to come up
	I0131 03:18:55.671954 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:55.672388 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:55.672431 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:55.672334 1467209 retry.go:31] will retry after 716.179011ms: waiting for machine to come up
	I0131 03:18:56.390239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:56.390632 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:56.390667 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:56.390568 1467209 retry.go:31] will retry after 881.998023ms: waiting for machine to come up
	I0131 03:18:57.274841 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:57.275260 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:57.275293 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:57.275202 1467209 retry.go:31] will retry after 1.172177257s: waiting for machine to come up
	I0131 03:18:58.449291 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:58.449814 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:58.449869 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:58.449774 1467209 retry.go:31] will retry after 1.046487536s: waiting for machine to come up
	I0131 03:18:57.925392 1465496 start.go:365] acquiring machines lock for no-preload-625812: {Name:mk64880f03840b1c8a171c5238f562a855fdae98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0131 03:18:59.498215 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:18:59.498699 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:18:59.498739 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:18:59.498640 1467209 retry.go:31] will retry after 1.563889217s: waiting for machine to come up
	I0131 03:19:01.063580 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:01.064137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:01.064179 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:01.064063 1467209 retry.go:31] will retry after 2.225514736s: waiting for machine to come up
	I0131 03:19:03.290747 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:03.291285 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:03.291322 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:03.291205 1467209 retry.go:31] will retry after 2.011947032s: waiting for machine to come up
	I0131 03:19:05.305574 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:05.306072 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:05.306106 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:05.306012 1467209 retry.go:31] will retry after 3.104285698s: waiting for machine to come up
	I0131 03:19:08.411557 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:08.412028 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | unable to find current IP address of domain old-k8s-version-711547 in network mk-old-k8s-version-711547
	I0131 03:19:08.412054 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | I0131 03:19:08.411975 1467209 retry.go:31] will retry after 4.201966677s: waiting for machine to come up
	I0131 03:19:12.618299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.618866 1465727 main.go:141] libmachine: (old-k8s-version-711547) Found IP for machine: 192.168.50.63
	I0131 03:19:12.618893 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserving static IP address...
	I0131 03:19:12.618913 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has current primary IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.619364 1465727 main.go:141] libmachine: (old-k8s-version-711547) Reserved static IP address: 192.168.50.63
	I0131 03:19:12.619389 1465727 main.go:141] libmachine: (old-k8s-version-711547) Waiting for SSH to be available...
	I0131 03:19:12.619414 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.619452 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | skip adding static IP to network mk-old-k8s-version-711547 - found existing host DHCP lease matching {name: "old-k8s-version-711547", mac: "52:54:00:1b:2a:99", ip: "192.168.50.63"}
	I0131 03:19:12.619471 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Getting to WaitForSSH function...
	I0131 03:19:12.621473 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621783 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.621805 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.621891 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH client type: external
	I0131 03:19:12.621934 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa (-rw-------)
	I0131 03:19:12.621965 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:12.621977 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | About to run SSH command:
	I0131 03:19:12.621987 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | exit 0
	I0131 03:19:12.718254 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:12.718659 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetConfigRaw
	I0131 03:19:12.719369 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:12.722134 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722588 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.722611 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.722906 1465727 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/config.json ...
	I0131 03:19:12.723101 1465727 machine.go:88] provisioning docker machine ...
	I0131 03:19:12.723121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:12.723399 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723611 1465727 buildroot.go:166] provisioning hostname "old-k8s-version-711547"
	I0131 03:19:12.723630 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:12.723795 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.726052 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726463 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.726507 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.726656 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.726832 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727022 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.727122 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.727283 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.727665 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.727680 1465727 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-711547 && echo "old-k8s-version-711547" | sudo tee /etc/hostname
	I0131 03:19:12.870818 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-711547
	
	I0131 03:19:12.870872 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:12.873799 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874205 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:12.874242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:12.874355 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:12.874585 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874774 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:12.874920 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:12.875079 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:12.875412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:12.875428 1465727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-711547' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-711547/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-711547' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:13.014386 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:13.014419 1465727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:13.014447 1465727 buildroot.go:174] setting up certificates
	I0131 03:19:13.014460 1465727 provision.go:83] configureAuth start
	I0131 03:19:13.014471 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetMachineName
	I0131 03:19:13.014821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:13.017730 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018105 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.018149 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.018286 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.020361 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020680 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.020707 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.020896 1465727 provision.go:138] copyHostCerts
	I0131 03:19:13.020961 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:13.020975 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:13.021069 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:13.021199 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:13.021212 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:13.021252 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:13.021393 1465727 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:13.021404 1465727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:13.021442 1465727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:13.021512 1465727 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-711547 san=[192.168.50.63 192.168.50.63 localhost 127.0.0.1 minikube old-k8s-version-711547]
	I0131 03:19:13.265370 1465727 provision.go:172] copyRemoteCerts
	I0131 03:19:13.265438 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:13.265466 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.268546 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269055 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.269090 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.269281 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.269518 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.269688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.269849 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.362848 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:13.384287 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0131 03:19:13.405813 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:19:13.427630 1465727 provision.go:86] duration metric: configureAuth took 413.151329ms
	I0131 03:19:13.427671 1465727 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:13.427880 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:19:13.427963 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.430829 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431239 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.431299 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.431515 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.431771 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.431939 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.432092 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.432256 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.432619 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.432638 1465727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:14.011257 1465898 start.go:369] acquired machines lock for "default-k8s-diff-port-873005" in 4m34.419162413s
	I0131 03:19:14.011330 1465898 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:14.011340 1465898 fix.go:54] fixHost starting: 
	I0131 03:19:14.011729 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:14.011767 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:14.028941 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0131 03:19:14.029399 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:14.029937 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:19:14.029968 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:14.030321 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:14.030510 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:14.030692 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:19:14.032290 1465898 fix.go:102] recreateIfNeeded on default-k8s-diff-port-873005: state=Stopped err=<nil>
	I0131 03:19:14.032322 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	W0131 03:19:14.032499 1465898 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:14.034263 1465898 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-873005" ...
	I0131 03:19:14.035857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Start
	I0131 03:19:14.036028 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring networks are active...
	I0131 03:19:14.036734 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network default is active
	I0131 03:19:14.037140 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Ensuring network mk-default-k8s-diff-port-873005 is active
	I0131 03:19:14.037572 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Getting domain xml...
	I0131 03:19:14.038254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Creating domain...
	I0131 03:19:13.745584 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:13.745630 1465727 machine.go:91] provisioned docker machine in 1.02251207s
	I0131 03:19:13.745646 1465727 start.go:300] post-start starting for "old-k8s-version-711547" (driver="kvm2")
	I0131 03:19:13.745663 1465727 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:13.745688 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:13.746069 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:13.746100 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.748837 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749259 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.749309 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.749489 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.749691 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.749848 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.749999 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:13.844423 1465727 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:13.848230 1465727 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:13.848263 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:13.848346 1465727 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:13.848431 1465727 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:13.848517 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:13.857046 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:13.877753 1465727 start.go:303] post-start completed in 132.085834ms
	I0131 03:19:13.877806 1465727 fix.go:56] fixHost completed within 20.954639604s
	I0131 03:19:13.877836 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:13.880627 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.880914 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:13.880948 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:13.881168 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:13.881401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881594 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:13.881802 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:13.882012 1465727 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:13.882412 1465727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.63 22 <nil> <nil>}
	I0131 03:19:13.882424 1465727 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:14.011062 1465727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671153.963761136
	
	I0131 03:19:14.011098 1465727 fix.go:206] guest clock: 1706671153.963761136
	I0131 03:19:14.011111 1465727 fix.go:219] Guest: 2024-01-31 03:19:13.963761136 +0000 UTC Remote: 2024-01-31 03:19:13.877812082 +0000 UTC m=+285.451358106 (delta=85.949054ms)
	I0131 03:19:14.011141 1465727 fix.go:190] guest clock delta is within tolerance: 85.949054ms
	I0131 03:19:14.011149 1465727 start.go:83] releasing machines lock for "old-k8s-version-711547", held for 21.088010365s
	I0131 03:19:14.011234 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.011556 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:14.014323 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014754 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.014790 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.014966 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015623 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015846 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:19:14.015953 1465727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:14.016017 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.016087 1465727 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:14.016121 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:19:14.018767 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019063 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019147 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019185 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019338 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019422 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:14.019450 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:14.019500 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:19:14.019693 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.019775 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:19:14.019854 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.019952 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:19:14.020096 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:19:14.111280 1465727 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:14.148710 1465727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:14.287476 1465727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:14.293232 1465727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:14.293309 1465727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:14.306910 1465727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:14.306939 1465727 start.go:475] detecting cgroup driver to use...
	I0131 03:19:14.307001 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:14.325824 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:14.339835 1465727 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:14.339908 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:14.354064 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:14.367342 1465727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:14.476462 1465727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:14.602643 1465727 docker.go:233] disabling docker service ...
	I0131 03:19:14.602711 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:14.618228 1465727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:14.630450 1465727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:14.758176 1465727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:14.870949 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:14.882268 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:14.898622 1465727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0131 03:19:14.898685 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.907377 1465727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:14.907470 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.915868 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.924046 1465727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:14.932324 1465727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:14.941046 1465727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:14.949134 1465727 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:14.949196 1465727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:14.965561 1465727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:14.973790 1465727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:15.078782 1465727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:15.239650 1465727 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:15.239735 1465727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:15.244418 1465727 start.go:543] Will wait 60s for crictl version
	I0131 03:19:15.244501 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:15.247984 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:15.287716 1465727 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:15.287827 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.339818 1465727 ssh_runner.go:195] Run: crio --version
	I0131 03:19:15.393318 1465727 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0131 03:19:15.394911 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetIP
	I0131 03:19:15.397888 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398288 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:19:15.398313 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:19:15.398637 1465727 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:15.402865 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:15.414268 1465727 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 03:19:15.414361 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:15.460589 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:15.460676 1465727 ssh_runner.go:195] Run: which lz4
	I0131 03:19:15.464663 1465727 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:15.468694 1465727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:15.468728 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0131 03:19:17.115892 1465727 crio.go:444] Took 1.651263 seconds to copy over tarball
	I0131 03:19:17.115979 1465727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:15.308732 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting to get IP...
	I0131 03:19:15.309704 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310121 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.310199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.310092 1467325 retry.go:31] will retry after 215.51674ms: waiting for machine to come up
	I0131 03:19:15.527614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528155 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.528192 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.528108 1467325 retry.go:31] will retry after 346.07944ms: waiting for machine to come up
	I0131 03:19:15.875792 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876340 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:15.876375 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:15.876290 1467325 retry.go:31] will retry after 476.08407ms: waiting for machine to come up
	I0131 03:19:16.353712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.354323 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.354196 1467325 retry.go:31] will retry after 382.739917ms: waiting for machine to come up
	I0131 03:19:16.738958 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739534 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:16.739566 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:16.739504 1467325 retry.go:31] will retry after 511.138171ms: waiting for machine to come up
	I0131 03:19:17.252373 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252862 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:17.252902 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:17.252798 1467325 retry.go:31] will retry after 879.985444ms: waiting for machine to come up
	I0131 03:19:18.134757 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135287 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:18.135313 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:18.135233 1467325 retry.go:31] will retry after 1.043236668s: waiting for machine to come up
	I0131 03:19:19.179844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180339 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:19.180369 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:19.180288 1467325 retry.go:31] will retry after 1.296129808s: waiting for machine to come up
	I0131 03:19:19.822171 1465727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.706149181s)
	I0131 03:19:19.822217 1465727 crio.go:451] Took 2.706292 seconds to extract the tarball
	I0131 03:19:19.822233 1465727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:19.861493 1465727 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:19.905950 1465727 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0131 03:19:19.905979 1465727 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:19:19.906033 1465727 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.906061 1465727 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.906080 1465727 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.906077 1465727 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.906094 1465727 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:19.906099 1465727 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.906111 1465727 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0131 03:19:19.906179 1465727 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907636 1465727 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:19.907728 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:19.907746 1465727 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:19.907750 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:19.907749 1465727 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:19.907783 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:19.907805 1465727 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0131 03:19:19.907807 1465727 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.091717 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.119185 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.132448 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.140199 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0131 03:19:20.146177 1465727 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0131 03:19:20.146263 1465727 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.146324 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.206757 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.216932 1465727 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0131 03:19:20.216985 1465727 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.217082 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219340 1465727 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0131 03:19:20.219367 1465727 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0131 03:19:20.219390 1465727 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.219408 1465727 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.219432 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.219449 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.222519 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.241389 1465727 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0131 03:19:20.241449 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0131 03:19:20.241452 1465727 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0131 03:19:20.241566 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.293129 1465727 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0131 03:19:20.293183 1465727 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.293213 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0131 03:19:20.293262 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0131 03:19:20.293284 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0131 03:19:20.293232 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321447 1465727 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0131 03:19:20.321512 1465727 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.321576 1465727 ssh_runner.go:195] Run: which crictl
	I0131 03:19:20.321605 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0131 03:19:20.321743 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0131 03:19:20.401651 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0131 03:19:20.401720 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0131 03:19:20.401731 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0131 03:19:20.401793 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0131 03:19:20.401872 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0131 03:19:20.401945 1465727 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0131 03:19:20.439360 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0131 03:19:20.449635 1465727 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0131 03:19:20.765201 1465727 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:19:20.911818 1465727 cache_images.go:92] LoadImages completed in 1.005820808s
	W0131 03:19:20.911923 1465727 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0131 03:19:20.912019 1465727 ssh_runner.go:195] Run: crio config
	I0131 03:19:20.978267 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:20.978296 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:20.978318 1465727 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:20.978361 1465727 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.63 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-711547 NodeName:old-k8s-version-711547 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0131 03:19:20.978540 1465727 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-711547"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.63
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.63"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-711547
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.63:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:20.978635 1465727 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-711547 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:19:20.978690 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0131 03:19:20.988177 1465727 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:20.988281 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:20.999558 1465727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0131 03:19:21.018567 1465727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:21.036137 1465727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0131 03:19:21.051742 1465727 ssh_runner.go:195] Run: grep 192.168.50.63	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:21.056334 1465727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:21.068635 1465727 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547 for IP: 192.168.50.63
	I0131 03:19:21.068670 1465727 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:21.068847 1465727 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:21.068894 1465727 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:21.069089 1465727 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/client.key
	I0131 03:19:21.069185 1465727 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key.1519f60b
	I0131 03:19:21.069262 1465727 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key
	I0131 03:19:21.069418 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:21.069460 1465727 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:21.069476 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:21.069517 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:21.069556 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:21.069595 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:21.069658 1465727 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:21.070416 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:21.096160 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:21.119906 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:21.144478 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/old-k8s-version-711547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:21.169174 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:21.191807 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:21.215673 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:21.237705 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:21.262763 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:21.284935 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:21.306372 1465727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:21.327718 1465727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:21.343219 1465727 ssh_runner.go:195] Run: openssl version
	I0131 03:19:21.348904 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:21.358119 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362537 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.362619 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:21.368555 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:21.378236 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:21.387651 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392087 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.392155 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:21.397511 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:21.406631 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:21.416176 1465727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420716 1465727 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.420816 1465727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:21.426032 1465727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:21.434979 1465727 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:21.439153 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:21.444648 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:21.450243 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:21.455489 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:21.460794 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:21.466219 1465727 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:19:21.471530 1465727 kubeadm.go:404] StartCluster: {Name:old-k8s-version-711547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-711547 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:21.471628 1465727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:21.471677 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:21.508722 1465727 cri.go:89] found id: ""
	I0131 03:19:21.508795 1465727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:21.517913 1465727 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:21.517943 1465727 kubeadm.go:636] restartCluster start
	I0131 03:19:21.518012 1465727 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:21.526290 1465727 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:21.527501 1465727 kubeconfig.go:92] found "old-k8s-version-711547" server: "https://192.168.50.63:8443"
	I0131 03:19:21.530259 1465727 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:21.538442 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:21.538528 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:21.548956 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.038468 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.038574 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.049394 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:22.538605 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:22.538701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:22.549651 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:23.038857 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.038988 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.050489 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:20.478788 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479296 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:20.479341 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:20.479262 1467325 retry.go:31] will retry after 1.385706797s: waiting for machine to come up
	I0131 03:19:21.867040 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867480 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:21.867506 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:21.867432 1467325 retry.go:31] will retry after 2.023566474s: waiting for machine to come up
	I0131 03:19:23.893713 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894188 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:23.894222 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:23.894119 1467325 retry.go:31] will retry after 2.335724195s: waiting for machine to come up
	I0131 03:19:23.539335 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:23.539444 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:23.550866 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.038592 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.038710 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.050077 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:24.538579 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:24.538661 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:24.549810 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.039420 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.039512 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.051101 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:25.538549 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:25.538654 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:25.552821 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.039279 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.039395 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.050150 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:26.538699 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:26.538841 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:26.553086 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.038585 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.038701 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.050685 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:27.539261 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:27.539392 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:27.550316 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:28.039448 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.039564 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.051196 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
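
Each "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pair above is a single probe: pgrep is run on the guest and a non-zero exit simply means kube-apiserver has not appeared yet. A rough sketch of that probe, assuming plain ssh instead of minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning reports whether kube-apiserver is running on the guest.
    // pgrep exiting with status 1 (no match) is the normal "not up yet" case.
    func apiserverRunning(host string) bool {
    	cmd := exec.Command("ssh", host, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
    	return cmd.Run() == nil
    }

    func main() {
    	host := "docker@192.168.50.63"
    	for i := 0; i < 20; i++ {
    		if apiserverRunning(host) {
    			fmt.Println("apiserver process is up")
    			return
    		}
    		fmt.Println("stopped: unable to get apiserver pid, retrying")
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for the apiserver process")
    }
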
	I0131 03:19:26.231540 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231945 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:26.231970 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:26.231895 1467325 retry.go:31] will retry after 2.956919877s: waiting for machine to come up
	I0131 03:19:29.190010 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190513 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | unable to find current IP address of domain default-k8s-diff-port-873005 in network mk-default-k8s-diff-port-873005
	I0131 03:19:29.190549 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | I0131 03:19:29.190433 1467325 retry.go:31] will retry after 3.186526476s: waiting for machine to come up
	I0131 03:19:28.539230 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:28.539326 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:28.551055 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.038675 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.038783 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.049926 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:29.538507 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:29.538606 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:29.549309 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.039257 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.039359 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.050555 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:30.539147 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:30.539286 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:30.550179 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.038685 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.038809 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.050144 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.538939 1465727 api_server.go:166] Checking apiserver status ...
	I0131 03:19:31.539024 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:31.549604 1465727 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:31.549647 1465727 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:31.549660 1465727 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:31.549678 1465727 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:31.549770 1465727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:31.587751 1465727 cri.go:89] found id: ""
	I0131 03:19:31.587822 1465727 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:31.603397 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:31.612195 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:31.612263 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620959 1465727 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:31.620984 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:31.737416 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.645078 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.861238 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:32.944897 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:33.048396 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:33.048496 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
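
Because the kubeconfig files under /etc/kubernetes were missing, the cluster is rebuilt in place by re-running the individual kubeadm init phases in the order logged above (certs, kubeconfig, kubelet-start, control-plane, etcd). A hedged sketch of that sequence; the paths and phase names mirror the log, but the loop itself is illustrative rather than minikube's own code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const binDir = "/var/lib/minikube/binaries/v1.16.0"
    	const cfg = "/var/tmp/minikube/kubeadm.yaml"
    	// The phases are idempotent, so repeating them on a stopped node
    	// regenerates certificates, kubeconfigs and static pod manifests in order.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf("sudo env PATH=%s:$PATH kubeadm init phase %s --config %s", binDir, phase, cfg)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
    			return
    		}
    	}
    	fmt.Println("control plane manifests regenerated; waiting for kube-apiserver next")
    }
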
	I0131 03:19:33.587337 1466459 start.go:369] acquired machines lock for "embed-certs-958254" in 2m30.118621848s
	I0131 03:19:33.587411 1466459 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:33.587444 1466459 fix.go:54] fixHost starting: 
	I0131 03:19:33.587872 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:33.587906 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:33.608024 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0131 03:19:33.608545 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:33.609015 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:19:33.609048 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:33.609468 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:33.609659 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:33.609796 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:19:33.611524 1466459 fix.go:102] recreateIfNeeded on embed-certs-958254: state=Stopped err=<nil>
	I0131 03:19:33.611572 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	W0131 03:19:33.611752 1466459 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:33.613613 1466459 out.go:177] * Restarting existing kvm2 VM for "embed-certs-958254" ...
	I0131 03:19:32.379632 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380099 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.380134 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Found IP for machine: 192.168.61.123
	I0131 03:19:32.380150 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserving static IP address...
	I0131 03:19:32.380555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Reserved static IP address: 192.168.61.123
	I0131 03:19:32.380594 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.380610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Waiting for SSH to be available...
	I0131 03:19:32.380647 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | skip adding static IP to network mk-default-k8s-diff-port-873005 - found existing host DHCP lease matching {name: "default-k8s-diff-port-873005", mac: "52:54:00:b6:ab:c7", ip: "192.168.61.123"}
	I0131 03:19:32.380661 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Getting to WaitForSSH function...
	I0131 03:19:32.382401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.382787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.382872 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH client type: external
	I0131 03:19:32.382903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa (-rw-------)
	I0131 03:19:32.382943 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:32.382959 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | About to run SSH command:
	I0131 03:19:32.382984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | exit 0
	I0131 03:19:32.470672 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:32.471097 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetConfigRaw
	I0131 03:19:32.471768 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.474225 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474597 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.474631 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.474948 1465898 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/config.json ...
	I0131 03:19:32.475139 1465898 machine.go:88] provisioning docker machine ...
	I0131 03:19:32.475158 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:32.475374 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475542 1465898 buildroot.go:166] provisioning hostname "default-k8s-diff-port-873005"
	I0131 03:19:32.475564 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.475720 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.478005 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478356 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.478391 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.478466 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.478693 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.478871 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.479083 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.479287 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.479622 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.479636 1465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-873005 && echo "default-k8s-diff-port-873005" | sudo tee /etc/hostname
	I0131 03:19:32.608136 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-873005
	
	I0131 03:19:32.608173 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.611145 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611544 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.611580 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.611716 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.611937 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612154 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.612354 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.612511 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:32.612878 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:32.612903 1465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-873005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-873005/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-873005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:32.734103 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:32.734144 1465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:32.734176 1465898 buildroot.go:174] setting up certificates
	I0131 03:19:32.734196 1465898 provision.go:83] configureAuth start
	I0131 03:19:32.734209 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetMachineName
	I0131 03:19:32.734550 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:32.737468 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.737810 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.737844 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.738096 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.740787 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741199 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.741233 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.741374 1465898 provision.go:138] copyHostCerts
	I0131 03:19:32.741429 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:32.741442 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:32.741498 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:32.741632 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:32.741642 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:32.741665 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:32.741716 1465898 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:32.741722 1465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:32.741738 1465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:32.741784 1465898 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-873005 san=[192.168.61.123 192.168.61.123 localhost 127.0.0.1 minikube default-k8s-diff-port-873005]
	I0131 03:19:32.850632 1465898 provision.go:172] copyRemoteCerts
	I0131 03:19:32.850695 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:32.850721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:32.853291 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853614 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:32.853651 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:32.853828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:32.854016 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:32.854194 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:32.854361 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:32.943528 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0131 03:19:32.970345 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:32.995909 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:33.024408 1465898 provision.go:86] duration metric: configureAuth took 290.196472ms
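
The "generating server cert ... san=[...]" step above issues a server certificate for the machine whose subject alternative names cover its IP, localhost, and the machine names. A compact sketch of the same idea with crypto/x509; unlike the real provisioner it self-signs instead of chaining to the minikube CA, and it elides error handling, purely to stay short:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Errors elided for brevity in this sketch.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-873005"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the log line above.
    		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-873005"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.61.123"), net.ParseIP("127.0.0.1")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
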
	I0131 03:19:33.024438 1465898 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:33.024661 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:33.024755 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.027751 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.028312 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.028469 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.028719 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.028961 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.029180 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.029424 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.029790 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.029810 1465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:33.350806 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:33.350839 1465898 machine.go:91] provisioned docker machine in 875.685131ms
	I0131 03:19:33.350855 1465898 start.go:300] post-start starting for "default-k8s-diff-port-873005" (driver="kvm2")
	I0131 03:19:33.350871 1465898 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:33.350895 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.351287 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:33.351334 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.353986 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354419 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.354443 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.354689 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.354898 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.355046 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.355221 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.439603 1465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:33.443119 1465898 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:33.443145 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:33.443222 1465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:33.443320 1465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:33.443430 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:33.451425 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:33.471270 1465898 start.go:303] post-start completed in 120.397142ms
	I0131 03:19:33.471302 1465898 fix.go:56] fixHost completed within 19.459960903s
	I0131 03:19:33.471326 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.473691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474060 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.474091 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.474244 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.474430 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474627 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.474753 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.474918 1465898 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:33.475237 1465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0131 03:19:33.475249 1465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:19:33.587174 1465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671173.532604525
	
	I0131 03:19:33.587202 1465898 fix.go:206] guest clock: 1706671173.532604525
	I0131 03:19:33.587217 1465898 fix.go:219] Guest: 2024-01-31 03:19:33.532604525 +0000 UTC Remote: 2024-01-31 03:19:33.47130747 +0000 UTC m=+294.038044427 (delta=61.297055ms)
	I0131 03:19:33.587243 1465898 fix.go:190] guest clock delta is within tolerance: 61.297055ms
	I0131 03:19:33.587251 1465898 start.go:83] releasing machines lock for "default-k8s-diff-port-873005", held for 19.57594393s
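
The fix.go lines above compare the guest clock (read over SSH) against the host clock and accept the restart only when the skew is inside a tolerance; here the delta is about 61 ms. A small sketch of that comparison, with a hypothetical readGuestClock helper standing in for the SSH call and an assumed 2-second tolerance:

    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    // readGuestClock is a hypothetical stand-in for running `date` on the
    // guest over SSH and parsing the seconds.nanoseconds result.
    func readGuestClock() time.Time {
    	return time.Unix(1706671173, 532604525) // value from the log above
    }

    func main() {
    	guest := readGuestClock()
    	remote := time.Unix(1706671173, 471307470) // host-side timestamp from the log
    	delta := guest.Sub(remote)
    	const tolerance = 2 * time.Second // assumed for illustration
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }
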
	I0131 03:19:33.587282 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.587557 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:33.590395 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590776 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.590809 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.590995 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591623 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591822 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:19:33.591926 1465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:33.591999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.592054 1465898 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:33.592078 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:19:33.594999 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595446 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.595477 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595644 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.595805 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.595879 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596082 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596258 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.596286 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:33.596380 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:19:33.596390 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:33.596579 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:19:33.596760 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:19:33.596951 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:19:33.715222 1465898 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:33.721794 1465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:33.871506 1465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:33.877488 1465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:33.877596 1465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:33.896121 1465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:33.896156 1465898 start.go:475] detecting cgroup driver to use...
	I0131 03:19:33.896245 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:33.912876 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:33.927661 1465898 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:33.927743 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:33.944332 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:33.960438 1465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:34.086879 1465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:34.218866 1465898 docker.go:233] disabling docker service ...
	I0131 03:19:34.218946 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:34.233585 1465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:34.246358 1465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:34.387480 1465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:34.513082 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:34.526532 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:34.544801 1465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:34.544902 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.558806 1465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:34.558905 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.569251 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.582784 1465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:34.595979 1465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:34.608318 1465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:34.616417 1465898 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:34.616494 1465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:34.629018 1465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:34.638513 1465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:34.753541 1465898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:34.963779 1465898 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:34.963868 1465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:34.969755 1465898 start.go:543] Will wait 60s for crictl version
	I0131 03:19:34.969826 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:19:34.974176 1465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:35.020759 1465898 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:35.020850 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.072999 1465898 ssh_runner.go:195] Run: crio --version
	I0131 03:19:35.143712 1465898 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
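
After rewriting the CRI-O drop-in (pause image, cgroupfs cgroup manager, conmon cgroup) and restarting the service, the tooling waits up to 60s for the runtime socket before asking crictl for a version. A minimal sketch of that socket wait, assuming a plain local stat rather than the SSH-based check in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses, mirroring the "Will wait 60s for socket path" step.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is ready")
    }
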
	I0131 03:19:33.615078 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Start
	I0131 03:19:33.615258 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring networks are active...
	I0131 03:19:33.616056 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network default is active
	I0131 03:19:33.616376 1466459 main.go:141] libmachine: (embed-certs-958254) Ensuring network mk-embed-certs-958254 is active
	I0131 03:19:33.616770 1466459 main.go:141] libmachine: (embed-certs-958254) Getting domain xml...
	I0131 03:19:33.617424 1466459 main.go:141] libmachine: (embed-certs-958254) Creating domain...
	I0131 03:19:35.016562 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting to get IP...
	I0131 03:19:35.017711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.018134 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.018234 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.018115 1467469 retry.go:31] will retry after 281.115622ms: waiting for machine to come up
	I0131 03:19:35.300987 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.301642 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.301672 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.301583 1467469 retry.go:31] will retry after 382.696531ms: waiting for machine to come up
	I0131 03:19:35.686371 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:35.686945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:35.686983 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:35.686881 1467469 retry.go:31] will retry after 467.397008ms: waiting for machine to come up
	I0131 03:19:36.156392 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.157129 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.157161 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.157087 1467469 retry.go:31] will retry after 588.034996ms: waiting for machine to come up
	I0131 03:19:36.747103 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:36.747739 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:36.747771 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:36.747711 1467469 retry.go:31] will retry after 570.532804ms: waiting for machine to come up
	I0131 03:19:37.319694 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.320231 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.320264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.320206 1467469 retry.go:31] will retry after 572.77687ms: waiting for machine to come up
	I0131 03:19:37.895308 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:37.895814 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:37.895844 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:37.895769 1467469 retry.go:31] will retry after 833.23491ms: waiting for machine to come up
	I0131 03:19:33.549149 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.048799 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:34.549314 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.048885 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:35.075463 1465727 api_server.go:72] duration metric: took 2.027068042s to wait for apiserver process to appear ...
	I0131 03:19:35.075490 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:35.075525 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
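
Once the apiserver process exists, the wait switches from pgrep to polling /healthz, treating 403 (RBAC not bootstrapped yet) and 500 (poststarthooks still failing) as "keep waiting"; those are exactly the responses that show up further down. A sketch of that polling loop; the real client authenticates with the cluster's client certificate, whereas this one only skips TLS verification to stay short, so it would keep seeing the anonymous 403:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.50.63:8443/healthz"
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver is healthy")
    				return
    			}
    			// 403 and 500 both mean "not ready yet" during bootstrap.
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for a healthy apiserver")
    }
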
	I0131 03:19:35.145198 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetIP
	I0131 03:19:35.148610 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149052 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:19:35.149087 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:19:35.149329 1465898 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:35.153543 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:35.169144 1465898 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:35.169226 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:35.217572 1465898 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:35.217675 1465898 ssh_runner.go:195] Run: which lz4
	I0131 03:19:35.221897 1465898 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0131 03:19:35.226333 1465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:35.226373 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:36.870773 1465898 crio.go:444] Took 1.648904 seconds to copy over tarball
	I0131 03:19:36.870903 1465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
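
The preload handling above boils down to: check whether /preloaded.tar.lz4 already exists on the guest, copy the ~458 MB tarball over if it does not, then unpack it into /var with extended attributes preserved. A rough sketch using plain ssh/scp in place of minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) error {
    	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%v: %v\n%s", args, err, out)
    	}
    	return nil
    }

    func main() {
    	const remote = "docker@192.168.61.123"
    	const tarball = "/preloaded.tar.lz4"
    	// Existence check first; a missing file exits non-zero, which is the
    	// "No such file or directory" case in the log above.
    	if err := run("ssh", remote, "stat", tarball); err != nil {
    		local := "preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
    		if err := run("scp", local, remote+":"+tarball); err != nil {
    			panic(err)
    		}
    	}
    	// Unpack into /var, keeping security.capability xattrs intact so the
    	// restored container images behave like freshly pulled ones.
    	if err := run("ssh", remote, "sudo", "tar", "--xattrs",
    		"--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
    		panic(err)
    	}
    }
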
	I0131 03:19:38.730812 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:38.731317 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:38.731367 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:38.731283 1467469 retry.go:31] will retry after 1.083923411s: waiting for machine to come up
	I0131 03:19:39.816550 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:39.817000 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:39.817035 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:39.816957 1467469 retry.go:31] will retry after 1.414569505s: waiting for machine to come up
	I0131 03:19:41.232711 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:41.233072 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:41.233104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:41.233020 1467469 retry.go:31] will retry after 1.829994317s: waiting for machine to come up
	I0131 03:19:43.065343 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:43.065823 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:43.065857 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:43.065760 1467469 retry.go:31] will retry after 2.506323142s: waiting for machine to come up
	I0131 03:19:40.076389 1465727 api_server.go:269] stopped: https://192.168.50.63:8443/healthz: Get "https://192.168.50.63:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0131 03:19:40.076448 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.717017 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.717059 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:41.717079 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:41.738258 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:41.738291 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:42.075699 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.730135 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.730181 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:42.730203 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:42.805335 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:42.805375 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.076421 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.082935 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0131 03:19:43.082971 1465727 api_server.go:103] status: https://192.168.50.63:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0131 03:19:43.575664 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:19:43.582814 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:19:43.593073 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:19:43.593113 1465727 api_server.go:131] duration metric: took 8.517613988s to wait for apiserver health ...
	I0131 03:19:43.593127 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:19:43.593144 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:43.594982 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
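	Note: the wait loop logged above simply polls the apiserver's /healthz endpoint, tolerating 403 and 500 responses while the bootstrap post-start hooks finish, and proceeds once a 200 is returned. A minimal Go sketch of such a poll is shown below; the URL, retry interval, and timeout are illustrative assumptions for this sketch, not minikube's actual implementation.

	// waitForHealthz polls an apiserver /healthz endpoint until it reports 200 OK
	// or the timeout elapses. TLS verification is skipped only because this sketch
	// has no CA bundle at hand; a real client would verify the serving certificate.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is ready
				}
				// 403/500 while bootstrap hooks are still completing: keep retrying
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.63:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}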
	I0131 03:19:39.815034 1465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944091458s)
	I0131 03:19:39.815074 1465898 crio.go:451] Took 2.944224 seconds to extract the tarball
	I0131 03:19:39.815090 1465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:19:39.855696 1465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:39.904386 1465898 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:19:39.904418 1465898 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:19:39.904509 1465898 ssh_runner.go:195] Run: crio config
	I0131 03:19:39.972894 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:19:39.972928 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:19:39.972957 1465898 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:19:39.972985 1465898 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-873005 NodeName:default-k8s-diff-port-873005 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:19:39.973201 1465898 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-873005"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:19:39.973298 1465898 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-873005 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0131 03:19:39.973365 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:19:39.982097 1465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:19:39.982206 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:19:39.993781 1465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0131 03:19:40.012618 1465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:19:40.031973 1465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0131 03:19:40.049646 1465898 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0131 03:19:40.053498 1465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:40.066873 1465898 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005 for IP: 192.168.61.123
	I0131 03:19:40.066914 1465898 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:19:40.067198 1465898 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:19:40.067254 1465898 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:19:40.067376 1465898 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/client.key
	I0131 03:19:40.067474 1465898 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key.596e38b1
	I0131 03:19:40.067535 1465898 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key
	I0131 03:19:40.067748 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:19:40.067797 1465898 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:19:40.067813 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:19:40.067850 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:19:40.067885 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:19:40.067924 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:19:40.067978 1465898 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:40.068687 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:19:40.094577 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:19:40.117833 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:19:40.140782 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/default-k8s-diff-port-873005/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:19:40.163701 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:19:40.187177 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:19:40.218570 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:19:40.246136 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:19:40.275403 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:19:40.302040 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:19:40.327371 1465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:19:40.352927 1465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:19:40.371690 1465898 ssh_runner.go:195] Run: openssl version
	I0131 03:19:40.377700 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:19:40.387507 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393609 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.393701 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:19:40.401095 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:19:40.415647 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:19:40.426902 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431720 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.431803 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:19:40.437347 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:19:40.446986 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:19:40.457779 1465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462716 1465898 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.462790 1465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:19:40.468321 1465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:19:40.481055 1465898 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:19:40.486096 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:19:40.492538 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:19:40.498664 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:19:40.504630 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:19:40.510588 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:19:40.516480 1465898 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0131 03:19:40.524391 1465898 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-873005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-873005 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:19:40.524509 1465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:19:40.524570 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:40.575788 1465898 cri.go:89] found id: ""
	I0131 03:19:40.575887 1465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:19:40.585291 1465898 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:19:40.585320 1465898 kubeadm.go:636] restartCluster start
	I0131 03:19:40.585383 1465898 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:19:40.594593 1465898 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:40.596215 1465898 kubeconfig.go:92] found "default-k8s-diff-port-873005" server: "https://192.168.61.123:8444"
	I0131 03:19:40.600123 1465898 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:19:40.609224 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:40.609289 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:40.620769 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.110331 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.110450 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.121982 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:41.609492 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:41.609592 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:41.621972 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.109411 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.109515 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.124820 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:42.609296 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:42.609412 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:42.621029 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.109511 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.109606 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.124911 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:43.609397 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:43.609514 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:43.626240 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:44.109323 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.109419 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.124549 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.573357 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:45.573785 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:45.573821 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:45.573735 1467469 retry.go:31] will retry after 3.608126135s: waiting for machine to come up
	I0131 03:19:43.596636 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:19:43.613239 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:19:43.655123 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:19:43.665773 1465727 system_pods.go:59] 7 kube-system pods found
	I0131 03:19:43.665819 1465727 system_pods.go:61] "coredns-5644d7b6d9-2g2fj" [fc3c718c-696b-4a57-83e2-d9ee3bed6923] Running
	I0131 03:19:43.665844 1465727 system_pods.go:61] "etcd-old-k8s-version-711547" [4c5a2527-ffa7-4771-8380-56556030ad90] Running
	I0131 03:19:43.665852 1465727 system_pods.go:61] "kube-apiserver-old-k8s-version-711547" [df7cbcbe-1aeb-4986-82e5-70d495b2579d] Running
	I0131 03:19:43.665859 1465727 system_pods.go:61] "kube-controller-manager-old-k8s-version-711547" [21cccd1c-4b8e-4d4f-956d-872aa474e9d8] Running
	I0131 03:19:43.665868 1465727 system_pods.go:61] "kube-proxy-7dtkz" [aac05831-252e-486d-8bc8-772731374a89] Running
	I0131 03:19:43.665875 1465727 system_pods.go:61] "kube-scheduler-old-k8s-version-711547" [da2f43ad-bbc3-44fb-a608-08c2ae08818f] Running
	I0131 03:19:43.665885 1465727 system_pods.go:61] "storage-provisioner" [f16355c3-b573-40f2-ad98-32c077f04e46] Running
	I0131 03:19:43.665894 1465727 system_pods.go:74] duration metric: took 10.742015ms to wait for pod list to return data ...
	I0131 03:19:43.665915 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:19:43.670287 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:19:43.670328 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:19:43.670343 1465727 node_conditions.go:105] duration metric: took 4.422551ms to run NodePressure ...
	I0131 03:19:43.670366 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:43.947579 1465727 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:19:43.952499 1465727 retry.go:31] will retry after 170.414704ms: kubelet not initialised
	I0131 03:19:44.130420 1465727 retry.go:31] will retry after 504.822426ms: kubelet not initialised
	I0131 03:19:44.640095 1465727 retry.go:31] will retry after 519.270243ms: kubelet not initialised
	I0131 03:19:45.164417 1465727 retry.go:31] will retry after 730.256814ms: kubelet not initialised
	I0131 03:19:45.903026 1465727 retry.go:31] will retry after 853.098887ms: kubelet not initialised
	I0131 03:19:46.764300 1465727 retry.go:31] will retry after 2.467014704s: kubelet not initialised
	I0131 03:19:44.609572 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:44.609682 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:44.625242 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.109761 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.109894 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.121467 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:45.610114 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:45.610210 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:45.621421 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.109926 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.109996 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.121003 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:46.609509 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:46.609649 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:46.620779 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.110208 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.110316 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.122909 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:47.609355 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:47.609474 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:47.620375 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.109993 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.110131 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.123531 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:48.610170 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:48.610266 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:48.620964 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.109874 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.109997 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.121344 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:49.183666 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:49.184174 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | unable to find current IP address of domain embed-certs-958254 in network mk-embed-certs-958254
	I0131 03:19:49.184209 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | I0131 03:19:49.184103 1467469 retry.go:31] will retry after 3.277150176s: waiting for machine to come up
	I0131 03:19:52.465465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.465830 1466459 main.go:141] libmachine: (embed-certs-958254) Found IP for machine: 192.168.39.232
	I0131 03:19:52.465849 1466459 main.go:141] libmachine: (embed-certs-958254) Reserving static IP address...
	I0131 03:19:52.465863 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has current primary IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.466264 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.466307 1466459 main.go:141] libmachine: (embed-certs-958254) Reserved static IP address: 192.168.39.232
	I0131 03:19:52.466331 1466459 main.go:141] libmachine: (embed-certs-958254) Waiting for SSH to be available...
	I0131 03:19:52.466352 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | skip adding static IP to network mk-embed-certs-958254 - found existing host DHCP lease matching {name: "embed-certs-958254", mac: "52:54:00:13:06:de", ip: "192.168.39.232"}
	I0131 03:19:52.466368 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Getting to WaitForSSH function...
	I0131 03:19:52.468562 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.468867 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.468900 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.469041 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH client type: external
	I0131 03:19:52.469074 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa (-rw-------)
	I0131 03:19:52.469117 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:19:52.469137 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | About to run SSH command:
	I0131 03:19:52.469151 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | exit 0
	I0131 03:19:52.554397 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | SSH cmd err, output: <nil>: 
	I0131 03:19:52.554838 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetConfigRaw
	I0131 03:19:52.555611 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.558511 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.558906 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.558945 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.559188 1466459 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/config.json ...
	I0131 03:19:52.559400 1466459 machine.go:88] provisioning docker machine ...
	I0131 03:19:52.559421 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:52.559645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559816 1466459 buildroot.go:166] provisioning hostname "embed-certs-958254"
	I0131 03:19:52.559831 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.559994 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.562543 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.562901 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.562933 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.563085 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.563276 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563436 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.563628 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.563800 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.564147 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.564161 1466459 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-958254 && echo "embed-certs-958254" | sudo tee /etc/hostname
	I0131 03:19:52.688777 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-958254
	
	I0131 03:19:52.688817 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.692015 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.692497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.692797 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.693013 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693184 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.693388 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.693579 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:52.694043 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:52.694071 1466459 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-958254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-958254/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-958254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:19:52.821443 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:19:52.821489 1466459 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:19:52.821543 1466459 buildroot.go:174] setting up certificates
	I0131 03:19:52.821567 1466459 provision.go:83] configureAuth start
	I0131 03:19:52.821583 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetMachineName
	I0131 03:19:52.821930 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:52.825108 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825496 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.825527 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.825756 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.828269 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828621 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.828651 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.828893 1466459 provision.go:138] copyHostCerts
	I0131 03:19:52.828964 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:19:52.828987 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:19:52.829069 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:19:52.829194 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:19:52.829209 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:19:52.829243 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:19:52.829323 1466459 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:19:52.829335 1466459 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:19:52.829366 1466459 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:19:52.829466 1466459 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.embed-certs-958254 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube embed-certs-958254]
	I0131 03:19:52.931760 1466459 provision.go:172] copyRemoteCerts
	I0131 03:19:52.931825 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:19:52.931856 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:52.935111 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935440 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:52.935465 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:52.935721 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:52.935915 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:52.936117 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:52.936273 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.024352 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:19:53.051185 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:19:53.076996 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0131 03:19:53.097919 1466459 provision.go:86] duration metric: configureAuth took 276.335726ms
	I0131 03:19:53.097951 1466459 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:19:53.098189 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:19:53.098319 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.101687 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102128 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.102178 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.102334 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.102610 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.102877 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.103072 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.103309 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.103829 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.103860 1466459 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:19:49.236547 1465727 retry.go:31] will retry after 1.793227218s: kubelet not initialised
	I0131 03:19:51.035248 1465727 retry.go:31] will retry after 2.779615352s: kubelet not initialised
	I0131 03:19:53.664145 1465496 start.go:369] acquired machines lock for "no-preload-625812" in 55.738696582s
	I0131 03:19:53.664205 1465496 start.go:96] Skipping create...Using existing machine configuration
	I0131 03:19:53.664216 1465496 fix.go:54] fixHost starting: 
	I0131 03:19:53.664629 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:19:53.664680 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:19:53.683147 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I0131 03:19:53.684034 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:19:53.684629 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:19:53.684660 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:19:53.685055 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:19:53.685266 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:19:53.685468 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:19:53.687260 1465496 fix.go:102] recreateIfNeeded on no-preload-625812: state=Stopped err=<nil>
	I0131 03:19:53.687288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	W0131 03:19:53.687444 1465496 fix.go:128] unexpected machine state, will restart: <nil>
	I0131 03:19:53.689464 1465496 out.go:177] * Restarting existing kvm2 VM for "no-preload-625812" ...
	I0131 03:19:49.610240 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:49.610357 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:49.621551 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.110145 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.110248 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.121902 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.609752 1465898 api_server.go:166] Checking apiserver status ...
	I0131 03:19:50.609896 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:19:50.620729 1465898 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:50.620760 1465898 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:19:50.620769 1465898 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:19:50.620781 1465898 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:19:50.620842 1465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:19:50.655962 1465898 cri.go:89] found id: ""
	I0131 03:19:50.656034 1465898 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:19:50.670196 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:19:50.678438 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:19:50.678512 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686353 1465898 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:19:50.686377 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:50.787983 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.766656 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:51.947670 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.020841 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:19:52.087869 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:19:52.087974 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:52.588285 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.088598 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.588683 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.088222 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:53.416070 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:19:53.416102 1466459 machine.go:91] provisioned docker machine in 856.686657ms
	I0131 03:19:53.416115 1466459 start.go:300] post-start starting for "embed-certs-958254" (driver="kvm2")
	I0131 03:19:53.416130 1466459 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:19:53.416152 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.416515 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:19:53.416550 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.419110 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419497 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.419525 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.419836 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.420057 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.420223 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.420376 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.503785 1466459 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:19:53.507858 1466459 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:19:53.507890 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:19:53.508021 1466459 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:19:53.508094 1466459 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:19:53.508184 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:19:53.515845 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:19:53.537459 1466459 start.go:303] post-start completed in 121.324433ms
	I0131 03:19:53.537495 1466459 fix.go:56] fixHost completed within 19.950074846s
	I0131 03:19:53.537526 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.540719 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541097 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.541138 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.541371 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.541590 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541707 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.541924 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.542116 1466459 main.go:141] libmachine: Using SSH client type: native
	I0131 03:19:53.542438 1466459 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0131 03:19:53.542452 1466459 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0131 03:19:53.663950 1466459 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671193.614107467
	
	I0131 03:19:53.663981 1466459 fix.go:206] guest clock: 1706671193.614107467
	I0131 03:19:53.663991 1466459 fix.go:219] Guest: 2024-01-31 03:19:53.614107467 +0000 UTC Remote: 2024-01-31 03:19:53.537501013 +0000 UTC m=+170.232508862 (delta=76.606454ms)
	I0131 03:19:53.664051 1466459 fix.go:190] guest clock delta is within tolerance: 76.606454ms
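	[editor's note] The three fix.go lines above record the guest-clock check: run `date +%s.%N` on the guest, parse the result, and compare it with the host clock. Below is a minimal illustrative sketch of that comparison, not minikube's actual fix.go code; the function name guestClockDelta and the 2-second tolerance are assumptions, and the sample timestamp is taken from the log above.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is from the given local (host) time.
func guestClockDelta(dateOutput string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing seconds: %w", err)
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, fmt.Errorf("parsing nanoseconds: %w", err)
		}
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	// Sample value from the log above; the tolerance is an assumed constant.
	delta, err := guestClockDelta("1706671193.614107467", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}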
	I0131 03:19:53.664061 1466459 start.go:83] releasing machines lock for "embed-certs-958254", held for 20.076673524s
	I0131 03:19:53.664095 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.664469 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:53.667439 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668024 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.668104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.668219 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.668884 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669087 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:19:53.669227 1466459 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:19:53.669314 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.669346 1466459 ssh_runner.go:195] Run: cat /version.json
	I0131 03:19:53.669377 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:19:53.673093 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673248 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673420 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673194 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673517 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673557 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:53.673580 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:53.673667 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:19:53.673734 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.673969 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.673982 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:19:53.674173 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:19:53.674180 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.674312 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:19:53.799336 1466459 ssh_runner.go:195] Run: systemctl --version
	I0131 03:19:53.805162 1466459 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:19:53.952587 1466459 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:19:53.958419 1466459 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:19:53.958530 1466459 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:19:53.971832 1466459 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:19:53.971866 1466459 start.go:475] detecting cgroup driver to use...
	I0131 03:19:53.971946 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:19:53.988375 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:19:54.000875 1466459 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:19:54.000948 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:19:54.017770 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:19:54.034214 1466459 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:19:54.154352 1466459 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:19:54.314926 1466459 docker.go:233] disabling docker service ...
	I0131 03:19:54.315012 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:19:54.330557 1466459 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:19:54.344595 1466459 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:19:54.468196 1466459 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:19:54.630438 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:19:54.645472 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:19:54.665340 1466459 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:19:54.665427 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.677758 1466459 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:19:54.677843 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.690405 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.702616 1466459 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:19:54.712654 1466459 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:19:54.723746 1466459 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:19:54.735284 1466459 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:19:54.735358 1466459 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:19:54.751082 1466459 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:19:54.762460 1466459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:19:54.916842 1466459 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:19:55.105770 1466459 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:19:55.105862 1466459 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:19:55.111870 1466459 start.go:543] Will wait 60s for crictl version
	I0131 03:19:55.112014 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:19:55.116743 1466459 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:19:55.165427 1466459 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:19:55.165526 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.223389 1466459 ssh_runner.go:195] Run: crio --version
	I0131 03:19:55.272307 1466459 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0131 03:19:53.690828 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Start
	I0131 03:19:53.691030 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring networks are active...
	I0131 03:19:53.691801 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network default is active
	I0131 03:19:53.692297 1465496 main.go:141] libmachine: (no-preload-625812) Ensuring network mk-no-preload-625812 is active
	I0131 03:19:53.693485 1465496 main.go:141] libmachine: (no-preload-625812) Getting domain xml...
	I0131 03:19:53.694618 1465496 main.go:141] libmachine: (no-preload-625812) Creating domain...
	I0131 03:19:55.042532 1465496 main.go:141] libmachine: (no-preload-625812) Waiting to get IP...
	I0131 03:19:55.043607 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.044041 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.044180 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.044045 1467687 retry.go:31] will retry after 230.922351ms: waiting for machine to come up
	I0131 03:19:55.276816 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.277402 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.277435 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.277367 1467687 retry.go:31] will retry after 370.068692ms: waiting for machine to come up
	I0131 03:19:55.274017 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetIP
	I0131 03:19:55.277592 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278017 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:19:55.278056 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:19:55.278356 1466459 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0131 03:19:55.283769 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:19:55.298107 1466459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 03:19:55.298188 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:19:55.338433 1466459 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0131 03:19:55.338558 1466459 ssh_runner.go:195] Run: which lz4
	I0131 03:19:55.342771 1466459 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0131 03:19:55.347160 1466459 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0131 03:19:55.347206 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0131 03:19:56.991725 1466459 crio.go:444] Took 1.648994 seconds to copy over tarball
	I0131 03:19:56.991821 1466459 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0131 03:19:53.823139 1465727 retry.go:31] will retry after 3.780431021s: kubelet not initialised
	I0131 03:19:57.615679 1465727 retry.go:31] will retry after 12.134340719s: kubelet not initialised
	I0131 03:19:54.588794 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:19:54.623052 1465898 api_server.go:72] duration metric: took 2.535180605s to wait for apiserver process to appear ...
	I0131 03:19:54.623092 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:19:54.623141 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
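	[editor's note] From here the default-k8s-diff-port run (pid 1465898) polls https://192.168.61.123:8444/healthz until the apiserver reports healthy; the 403 and 500 responses that appear further down mean the server is answering but not yet ready. Below is a minimal illustrative probe, not the api_server.go implementation; the URL is the one from the log, and InsecureSkipVerify stands in for the real client's CA handling.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz performs one GET against the apiserver healthz endpoint and
// returns the status code and response body.
func probeHealthz(url string) (int, string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err // apiserver not answering yet
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	url := "https://192.168.61.123:8444/healthz"
	for {
		code, body, err := probeHealthz(url)
		if err == nil && code == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		// 403 (anonymous user forbidden) and 500 (post-start hooks still
		// failing), as seen in the log, both count as "not ready yet".
		fmt.Printf("not ready yet: code=%d err=%v\n%s\n", code, err, body)
		time.Sleep(500 * time.Millisecond)
	}
}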
	I0131 03:19:55.649133 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:55.649797 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:55.649838 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:55.649768 1467687 retry.go:31] will retry after 421.622241ms: waiting for machine to come up
	I0131 03:19:56.073712 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.074467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.074513 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.074269 1467687 retry.go:31] will retry after 587.05453ms: waiting for machine to come up
	I0131 03:19:56.663210 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:56.663749 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:56.663790 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:56.663678 1467687 retry.go:31] will retry after 620.56275ms: waiting for machine to come up
	I0131 03:19:57.286207 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.286688 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.286737 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.286647 1467687 retry.go:31] will retry after 674.764903ms: waiting for machine to come up
	I0131 03:19:57.963069 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:57.963573 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:57.963599 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:57.963520 1467687 retry.go:31] will retry after 1.10400582s: waiting for machine to come up
	I0131 03:19:59.068964 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:19:59.069440 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:19:59.069467 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:19:59.069383 1467687 retry.go:31] will retry after 1.48867494s: waiting for machine to come up
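	[editor's note] The no-preload run above (pid 1465496) waits for the restarted VM to obtain a DHCP lease, retrying with steadily growing delays (230ms, 370ms, 421ms, 587ms, ...). A minimal sketch of that retry pattern follows; lookupIP is a stand-in for querying the libvirt DHCP leases, and the delays, jitter, and sample IP are illustrative assumptions, not retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP pretends the lease only appears on the fifth attempt; a real
// implementation would query the libvirt network for the machine's MAC.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.72.23", nil // illustrative address
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Printf("machine is up at %s after %d attempts\n", ip, attempt)
			return
		}
		// Grow the delay and add jitter, roughly matching the log's
		// progression of retry intervals.
		jitter := time.Duration(rand.Intn(100)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2
	}
}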
	I0131 03:20:00.084963 1466459 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093104085s)
	I0131 03:20:00.085000 1466459 crio.go:451] Took 3.093238 seconds to extract the tarball
	I0131 03:20:00.085014 1466459 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0131 03:20:00.153533 1466459 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:00.203133 1466459 crio.go:496] all images are preloaded for cri-o runtime.
	I0131 03:20:00.203215 1466459 cache_images.go:84] Images are preloaded, skipping loading
	I0131 03:20:00.203308 1466459 ssh_runner.go:195] Run: crio config
	I0131 03:20:00.266864 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:00.266898 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:00.266927 1466459 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:00.266955 1466459 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.232 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-958254 NodeName:embed-certs-958254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:00.267148 1466459 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-958254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0131 03:20:00.267253 1466459 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-958254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:00.267331 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0131 03:20:00.279543 1466459 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:00.279637 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:00.292463 1466459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0131 03:20:00.313102 1466459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0131 03:20:00.329962 1466459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0131 03:20:00.351487 1466459 ssh_runner.go:195] Run: grep 192.168.39.232	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:00.355881 1466459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:00.368624 1466459 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254 for IP: 192.168.39.232
	I0131 03:20:00.368668 1466459 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:00.368836 1466459 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:00.368890 1466459 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:00.368997 1466459 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/client.key
	I0131 03:20:00.369071 1466459 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key.ca7bc7e0
	I0131 03:20:00.369108 1466459 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key
	I0131 03:20:00.369230 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:00.369257 1466459 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:00.369268 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:00.369294 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:00.369317 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:00.369341 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:00.369379 1466459 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:00.370093 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:00.392771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0131 03:20:00.416504 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:00.441357 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/embed-certs-958254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0131 03:20:00.469603 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:00.493533 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:00.521871 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:00.547738 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:00.572771 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:00.596263 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:00.618766 1466459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:00.642074 1466459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:00.657634 1466459 ssh_runner.go:195] Run: openssl version
	I0131 03:20:00.662869 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:00.673704 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678201 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.678299 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:00.683872 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:00.694619 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:00.705736 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710374 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.710451 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:00.715928 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:00.727620 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:00.738237 1466459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742428 1466459 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.742525 1466459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:00.747812 1466459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:00.757953 1466459 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:00.762418 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:00.768325 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:00.773824 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:00.779967 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:00.785943 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:00.791907 1466459 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
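	[editor's note] The six `openssl x509 -noout -in ... -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. Below is a minimal Go equivalent of one such check using crypto/x509; it is an illustration, not minikube's code, and the certificate path is copied from the first check in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if expiring {
		fmt.Println("certificate expires within 24h; would regenerate")
	} else {
		fmt.Println("certificate is fresh")
	}
}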
	I0131 03:20:00.797790 1466459 kubeadm.go:404] StartCluster: {Name:embed-certs-958254 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-958254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:00.797882 1466459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:00.797989 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:00.843199 1466459 cri.go:89] found id: ""
	I0131 03:20:00.843289 1466459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:00.853963 1466459 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:00.853994 1466459 kubeadm.go:636] restartCluster start
	I0131 03:20:00.854060 1466459 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:00.864776 1466459 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:00.866019 1466459 kubeconfig.go:92] found "embed-certs-958254" server: "https://192.168.39.232:8443"
	I0131 03:20:00.868584 1466459 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:00.878689 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:00.878765 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:00.891577 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.378755 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.378849 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.392040 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:01.879661 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:01.879770 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:01.894998 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.379551 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.379671 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.393008 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:02.879560 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:02.879680 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:02.896699 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:19:59.557240 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.557285 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.557308 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.612724 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:19:59.612775 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:19:59.624061 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:19:59.721181 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:19:59.721236 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.123708 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.134187 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.134229 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:00.624066 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:00.630341 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:00.630374 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.123728 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.131385 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.131479 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:01.623667 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:01.629384 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:01.629431 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.123701 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.129220 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.129272 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:02.623693 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:02.629228 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:02.629271 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.123778 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.132555 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:03.132617 1465898 api_server.go:103] status: https://192.168.61.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:03.623244 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:20:03.630694 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:20:03.649732 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:03.649778 1465898 api_server.go:131] duration metric: took 9.02667615s to wait for apiserver health ...
	I0131 03:20:03.649792 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:20:03.649802 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:03.651944 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:03.653645 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:03.683325 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:03.719778 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:03.745975 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:03.746029 1465898 system_pods.go:61] "coredns-5dd5756b68-xlq7n" [0b9d620d-d79f-474e-aeb7-1357daaaa849] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:03.746044 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [2f2f474f-bee9-4df2-a5f6-2525bc15c99a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:03.746056 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [ba87e90b-b01b-4aa7-a4da-68d8e5c39020] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:03.746088 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [a96ebed4-d6f6-47b7-a8f6-b80acc9cde60] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:03.746111 1465898 system_pods.go:61] "kube-proxy-trv94" [c085dfdb-0b75-40c1-b331-ef687888090e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:03.746121 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [b7adce77-8007-4316-9a2a-bdcec260840c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:03.746141 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-fct8b" [b1d9d7e3-98c4-4b7a-acd1-d88fe109ef33] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:03.746155 1465898 system_pods.go:61] "storage-provisioner" [be762288-ff88-44e7-938d-9ecc8a977526] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:03.746169 1465898 system_pods.go:74] duration metric: took 26.36215ms to wait for pod list to return data ...
	I0131 03:20:03.746183 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:03.755320 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:03.755365 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:03.755384 1465898 node_conditions.go:105] duration metric: took 9.194114ms to run NodePressure ...
	I0131 03:20:03.755413 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:04.124222 1465898 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130888 1465898 kubeadm.go:787] kubelet initialised
	I0131 03:20:04.130921 1465898 kubeadm.go:788] duration metric: took 6.663771ms waiting for restarted kubelet to initialise ...
	I0131 03:20:04.130932 1465898 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:04.141883 1465898 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
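The 500-then-200 healthz sequence above is the apiserver finishing its post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes flip from failed to ok) before api_server.go reports it healthy. For illustration only, a minimal Go sketch of that kind of polling loop follows; the URL, the roughly 500ms cadence, the timeout, and the decision to skip TLS verification are assumptions for the sketch, not minikube's actual client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200,
// treating 500 responses (post-start hooks still running) as "retry".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch: skip certificate verification.
			// A real client would use the cluster CA and client certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log above
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.123:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}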
	I0131 03:20:00.559917 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:00.715628 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:00.715677 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:00.560506 1467687 retry.go:31] will retry after 1.67725835s: waiting for machine to come up
	I0131 03:20:02.240289 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:02.240826 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:02.240863 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:02.240781 1467687 retry.go:31] will retry after 2.023057937s: waiting for machine to come up
	I0131 03:20:04.266202 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:04.266733 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:04.266825 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:04.266715 1467687 retry.go:31] will retry after 2.664323304s: waiting for machine to come up
	I0131 03:20:03.379260 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.379366 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.395063 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:03.879206 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:03.879327 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:03.896172 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.378721 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.378829 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.395328 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:04.878823 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:04.878944 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:04.891061 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.379692 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.379795 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.395247 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:05.879667 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:05.879811 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:05.894445 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.378974 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.379107 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.391878 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.879343 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:06.879446 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:06.892910 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.379549 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.379647 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.391991 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:07.879610 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:07.879757 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:07.895280 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:06.154196 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:08.664906 1465898 pod_ready.go:102] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:06.932836 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:06.933529 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:06.933574 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:06.933459 1467687 retry.go:31] will retry after 3.065677387s: waiting for machine to come up
	I0131 03:20:10.001330 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:10.002186 1465496 main.go:141] libmachine: (no-preload-625812) DBG | unable to find current IP address of domain no-preload-625812 in network mk-no-preload-625812
	I0131 03:20:10.002216 1465496 main.go:141] libmachine: (no-preload-625812) DBG | I0131 03:20:10.002101 1467687 retry.go:31] will retry after 3.036905728s: waiting for machine to come up
	I0131 03:20:08.379200 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.379310 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.392983 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:08.878955 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:08.879070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:08.890999 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.379530 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.379633 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.391351 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:09.878733 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:09.878814 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:09.891556 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.379098 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.379206 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.391233 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.879672 1466459 api_server.go:166] Checking apiserver status ...
	I0131 03:20:10.879786 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:10.892324 1466459 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:10.892364 1466459 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:10.892377 1466459 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:10.892393 1466459 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:10.892471 1466459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:10.932354 1466459 cri.go:89] found id: ""
	I0131 03:20:10.932425 1466459 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:10.948273 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:10.957212 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:10.957285 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966329 1466459 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:10.966369 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.093326 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.750399 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:11.960956 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.060752 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:12.148963 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:12.149070 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:12.649736 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:13.150030 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
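After rewriting certs, kubeconfigs, and the control-plane manifests with the kubeadm phases above, the "waiting for apiserver process to appear" step simply re-runs the logged pgrep command until it finds a PID. Below is a minimal Go sketch of that retry loop, for illustration only: it runs pgrep locally, whereas minikube executes the command inside the guest over SSH (ssh_runner), and the 500ms interval and 2-minute timeout are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerProcess retries the pgrep command from the log until it
// prints a PID; a non-zero exit simply means "no matching process yet".
func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}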
	I0131 03:20:09.755152 1465727 retry.go:31] will retry after 13.770889272s: kubelet not initialised
	I0131 03:20:09.648674 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:09.648703 1465898 pod_ready.go:81] duration metric: took 5.506781604s waiting for pod "coredns-5dd5756b68-xlq7n" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:09.648716 1465898 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656233 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:11.656258 1465898 pod_ready.go:81] duration metric: took 2.007535905s waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:11.656267 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663570 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.663600 1465898 pod_ready.go:81] duration metric: took 1.007324961s waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.663611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668808 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.668832 1465898 pod_ready.go:81] duration metric: took 5.21407ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.668843 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673583 1465898 pod_ready.go:92] pod "kube-proxy-trv94" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.673603 1465898 pod_ready.go:81] duration metric: took 4.754586ms waiting for pod "kube-proxy-trv94" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.673611 1465898 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679052 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:12.679074 1465898 pod_ready.go:81] duration metric: took 5.453847ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:12.679082 1465898 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
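Each pod_ready.go wait above resolves once the pod reports the Ready condition as True. The following client-go sketch shows that check for illustration only; the kubeconfig path, namespace, poll interval, and pod name reuse values from this log and are not minikube's own helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ready, err := isPodReady(ctx, cs, "kube-system", "coredns-5dd5756b68-xlq7n")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // keep polling, as the pod_ready.go lines above do
	}
}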
	I0131 03:20:13.040911 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.041419 1465496 main.go:141] libmachine: (no-preload-625812) Found IP for machine: 192.168.72.23
	I0131 03:20:13.041451 1465496 main.go:141] libmachine: (no-preload-625812) Reserving static IP address...
	I0131 03:20:13.041471 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has current primary IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.042029 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.042083 1465496 main.go:141] libmachine: (no-preload-625812) Reserved static IP address: 192.168.72.23
	I0131 03:20:13.042105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | skip adding static IP to network mk-no-preload-625812 - found existing host DHCP lease matching {name: "no-preload-625812", mac: "52:54:00:11:1b:69", ip: "192.168.72.23"}
	I0131 03:20:13.042124 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Getting to WaitForSSH function...
	I0131 03:20:13.042143 1465496 main.go:141] libmachine: (no-preload-625812) Waiting for SSH to be available...
	I0131 03:20:13.044263 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044670 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.044707 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.044866 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH client type: external
	I0131 03:20:13.044890 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Using SSH private key: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa (-rw-------)
	I0131 03:20:13.044958 1465496 main.go:141] libmachine: (no-preload-625812) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0131 03:20:13.044979 1465496 main.go:141] libmachine: (no-preload-625812) DBG | About to run SSH command:
	I0131 03:20:13.044993 1465496 main.go:141] libmachine: (no-preload-625812) DBG | exit 0
	I0131 03:20:13.142763 1465496 main.go:141] libmachine: (no-preload-625812) DBG | SSH cmd err, output: <nil>: 
	I0131 03:20:13.143065 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetConfigRaw
	I0131 03:20:13.143763 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.146827 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147322 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.147356 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.147639 1465496 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/config.json ...
	I0131 03:20:13.147843 1465496 machine.go:88] provisioning docker machine ...
	I0131 03:20:13.147866 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:13.148104 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148307 1465496 buildroot.go:166] provisioning hostname "no-preload-625812"
	I0131 03:20:13.148332 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.148510 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.151259 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151623 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.151658 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.151808 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.152034 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152222 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.152415 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.152601 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.152979 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.152996 1465496 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-625812 && echo "no-preload-625812" | sudo tee /etc/hostname
	I0131 03:20:13.302957 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-625812
	
	I0131 03:20:13.302989 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.306162 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306612 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.306656 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.306932 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.307236 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307458 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.307644 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.307891 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.308385 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.308415 1465496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-625812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-625812/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-625812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0131 03:20:13.459393 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0131 03:20:13.459432 1465496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18051-1412717/.minikube CaCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18051-1412717/.minikube}
	I0131 03:20:13.459458 1465496 buildroot.go:174] setting up certificates
	I0131 03:20:13.459476 1465496 provision.go:83] configureAuth start
	I0131 03:20:13.459490 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetMachineName
	I0131 03:20:13.459818 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:13.462867 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463301 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.463333 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.463516 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.466156 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466597 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.466629 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.466788 1465496 provision.go:138] copyHostCerts
	I0131 03:20:13.466856 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem, removing ...
	I0131 03:20:13.466870 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem
	I0131 03:20:13.466945 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.pem (1078 bytes)
	I0131 03:20:13.467051 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem, removing ...
	I0131 03:20:13.467061 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem
	I0131 03:20:13.467099 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/cert.pem (1123 bytes)
	I0131 03:20:13.467182 1465496 exec_runner.go:144] found /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem, removing ...
	I0131 03:20:13.467195 1465496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem
	I0131 03:20:13.467226 1465496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18051-1412717/.minikube/key.pem (1679 bytes)
	I0131 03:20:13.467295 1465496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem org=jenkins.no-preload-625812 san=[192.168.72.23 192.168.72.23 localhost 127.0.0.1 minikube no-preload-625812]
	I0131 03:20:13.629331 1465496 provision.go:172] copyRemoteCerts
	I0131 03:20:13.629392 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0131 03:20:13.629420 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.632451 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.632871 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.632903 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.633155 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.633334 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.633502 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.633643 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:13.729991 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0131 03:20:13.755853 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0131 03:20:13.781125 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0131 03:20:13.803778 1465496 provision.go:86] duration metric: configureAuth took 344.286867ms
	I0131 03:20:13.803820 1465496 buildroot.go:189] setting minikube options for container-runtime
	I0131 03:20:13.804030 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:20:13.804138 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:13.807234 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807675 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:13.807736 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:13.807899 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:13.808108 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808307 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:13.808461 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:13.808663 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:13.809033 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:13.809055 1465496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0131 03:20:14.179008 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0131 03:20:14.179039 1465496 machine.go:91] provisioned docker machine in 1.031179568s
	I0131 03:20:14.179055 1465496 start.go:300] post-start starting for "no-preload-625812" (driver="kvm2")
	I0131 03:20:14.179072 1465496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0131 03:20:14.179134 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.179500 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0131 03:20:14.179542 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.183050 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183483 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.183515 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.183726 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.183919 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.184103 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.184299 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.282828 1465496 ssh_runner.go:195] Run: cat /etc/os-release
	I0131 03:20:14.288098 1465496 info.go:137] Remote host: Buildroot 2021.02.12
	I0131 03:20:14.288135 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/addons for local assets ...
	I0131 03:20:14.288242 1465496 filesync.go:126] Scanning /home/jenkins/minikube-integration/18051-1412717/.minikube/files for local assets ...
	I0131 03:20:14.288351 1465496 filesync.go:149] local asset: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem -> 14199762.pem in /etc/ssl/certs
	I0131 03:20:14.288482 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0131 03:20:14.297359 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:14.323339 1465496 start.go:303] post-start completed in 144.265535ms
	I0131 03:20:14.323379 1465496 fix.go:56] fixHost completed within 20.659162262s
	I0131 03:20:14.323408 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.326649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.327063 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.327386 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.327693 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.327882 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.328068 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.328260 1465496 main.go:141] libmachine: Using SSH client type: native
	I0131 03:20:14.328638 1465496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.23 22 <nil> <nil>}
	I0131 03:20:14.328668 1465496 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0131 03:20:14.464275 1465496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706671214.411008277
	
	I0131 03:20:14.464299 1465496 fix.go:206] guest clock: 1706671214.411008277
	I0131 03:20:14.464307 1465496 fix.go:219] Guest: 2024-01-31 03:20:14.411008277 +0000 UTC Remote: 2024-01-31 03:20:14.32338512 +0000 UTC m=+358.954052365 (delta=87.623157ms)
	I0131 03:20:14.464327 1465496 fix.go:190] guest clock delta is within tolerance: 87.623157ms
	I0131 03:20:14.464332 1465496 start.go:83] releasing machines lock for "no-preload-625812", held for 20.800154018s
	I0131 03:20:14.464349 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.464664 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:14.467627 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.467912 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.467952 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.468086 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468622 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468827 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:20:14.468918 1465496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0131 03:20:14.468974 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.469103 1465496 ssh_runner.go:195] Run: cat /version.json
	I0131 03:20:14.469143 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:20:14.471884 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472243 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472408 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472472 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472507 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:14.472426 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472696 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:14.472810 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:20:14.472825 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473046 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:20:14.473048 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473275 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.473288 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:20:14.473547 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:20:14.563583 1465496 ssh_runner.go:195] Run: systemctl --version
	I0131 03:20:14.602977 1465496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0131 03:20:14.752069 1465496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0131 03:20:14.759056 1465496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0131 03:20:14.759149 1465496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0131 03:20:14.778064 1465496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0131 03:20:14.778102 1465496 start.go:475] detecting cgroup driver to use...
	I0131 03:20:14.778197 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0131 03:20:14.791672 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0131 03:20:14.803938 1465496 docker.go:217] disabling cri-docker service (if available) ...
	I0131 03:20:14.804018 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0131 03:20:14.816689 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0131 03:20:14.829415 1465496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0131 03:20:14.956428 1465496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0131 03:20:15.082172 1465496 docker.go:233] disabling docker service ...
	I0131 03:20:15.082260 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0131 03:20:15.094675 1465496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0131 03:20:15.106262 1465496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0131 03:20:15.229460 1465496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0131 03:20:15.341585 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0131 03:20:15.354587 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0131 03:20:15.374141 1465496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0131 03:20:15.374228 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.386153 1465496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0131 03:20:15.386224 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.398130 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.407759 1465496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0131 03:20:15.417278 1465496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0131 03:20:15.427128 1465496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0131 03:20:15.437249 1465496 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0131 03:20:15.437318 1465496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0131 03:20:15.451522 1465496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0131 03:20:15.460741 1465496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0131 03:20:15.564813 1465496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0131 03:20:15.729334 1465496 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0131 03:20:15.729436 1465496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0131 03:20:15.734544 1465496 start.go:543] Will wait 60s for crictl version
	I0131 03:20:15.734634 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:15.738536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0131 03:20:15.789942 1465496 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0131 03:20:15.790066 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.844864 1465496 ssh_runner.go:195] Run: crio --version
	I0131 03:20:15.895286 1465496 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
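(Editorial aside.) The lines above restart CRI-O and then wait up to 60s for /var/run/crio/crio.sock before probing crictl. A minimal standalone sketch of that wait loop is below; it is not minikube's actual implementation (which runs stat over SSH via ssh_runner), and the 500ms poll interval is an assumption.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the given path exists or the timeout expires,
	// mirroring the "Will wait 60s for socket path" step in the log above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // poll interval is an assumption
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is up")
	}
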
	I0131 03:20:13.649824 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.150192 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.649250 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:14.677858 1466459 api_server.go:72] duration metric: took 2.528895825s to wait for apiserver process to appear ...
	I0131 03:20:14.677890 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:14.677920 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:14.688429 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:17.190684 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:15.896701 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetIP
	I0131 03:20:15.899655 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900079 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:20:15.900105 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:20:15.900392 1465496 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0131 03:20:15.904607 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:15.916202 1465496 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 03:20:15.916255 1465496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0131 03:20:15.964126 1465496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0131 03:20:15.964157 1465496 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0131 03:20:15.964213 1465496 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.964249 1465496 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.964291 1465496 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.964278 1465496 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.964411 1465496 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0131 03:20:15.964472 1465496 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.964696 1465496 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.964771 1465496 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:15.965842 1465496 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:15.966659 1465496 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0131 03:20:15.966705 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:15.966716 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:15.966737 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:15.967221 1465496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:15.967386 1465496 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.157890 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.160428 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0131 03:20:16.170727 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.185791 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.209517 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.212835 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.215809 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.221405 1465496 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0131 03:20:16.221457 1465496 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.221504 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369265 1465496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0131 03:20:16.369302 1465496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0131 03:20:16.369324 1465496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.369340 1465496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.369344 1465496 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0131 03:20:16.369367 1465496 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.369382 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369392 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369404 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369474 1465496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0131 03:20:16.369494 1465496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.369506 1465496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0131 03:20:16.369521 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369529 1465496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.369562 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:16.369617 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0131 03:20:16.384313 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0131 03:20:16.384273 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0131 03:20:16.384333 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0131 03:20:16.470950 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0131 03:20:16.471044 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0131 03:20:16.471091 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.496271 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0131 03:20:16.496296 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496398 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496485 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:16.496488 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:16.496338 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.496494 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:16.496730 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:16.531464 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531550 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0131 03:20:16.531570 1465496 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531594 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:16.531640 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0131 03:20:16.531595 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0131 03:20:16.531669 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531638 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0131 03:20:16.531738 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0131 03:20:16.536091 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0131 03:20:16.805880 1465496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339660 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.807978952s)
	I0131 03:20:20.339703 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0131 03:20:20.339719 1465496 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.533795146s)
	I0131 03:20:20.339744 1465496 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339785 1465496 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0131 03:20:20.339823 1465496 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:20.339829 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0131 03:20:20.339863 1465496 ssh_runner.go:195] Run: which crictl
	I0131 03:20:19.144422 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.144461 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.144481 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.199050 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:19.199092 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:19.199110 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.248370 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.248405 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:19.678887 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:19.699942 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:19.699975 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.178212 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.196360 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:20.196408 1466459 api_server.go:103] status: https://192.168.39.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:20.679003 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:20:20.685599 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:20:20.693909 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:20:20.693939 1466459 api_server.go:131] duration metric: took 6.016042033s to wait for apiserver health ...
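(Editorial aside.) The healthz sequence above polls the apiserver until the 403/500 responses give way to 200 "ok". The sketch below reproduces that loop in isolation, assuming an anonymous HTTPS request is enough to read the status code; the real api_server.go check uses the cluster's credentials, and the endpoint and timings here are simply copied from this log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skip TLS verification because the apiserver serves a self-signed cert.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.232:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz ok: %s\n", body)
					return
				}
				// 403 (anonymous) and 500 (poststarthooks still failing) both land here.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}
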
	I0131 03:20:20.693972 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:20:20.693978 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:20.695935 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:20.697296 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:20.708301 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:20.730496 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:20.741756 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:20.741799 1466459 system_pods.go:61] "coredns-5dd5756b68-ntmxp" [bb90dd61-c60a-4beb-b77c-66c4b5ce56a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:20.741810 1466459 system_pods.go:61] "etcd-embed-certs-958254" [69a5883a-307d-47d1-86ef-6f76bf77bdff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:20.741830 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [1cad3813-0df9-4729-862f-d1ab237d297c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:20.741841 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [34bfed89-5c8c-4294-843b-d32261c8fb5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:20.741851 1466459 system_pods.go:61] "kube-proxy-q6dmr" [092e0786-80f7-480c-8ede-95e11c1f17a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:20.741862 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [28c8d75e-9517-4ccc-85ef-5b535973c829] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:20.741876 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-d8x5f" [fc69fea8-ab7b-4f3d-980f-7ad995027e77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:20.741889 1466459 system_pods.go:61] "storage-provisioner" [5026a00d-8df8-408a-a164-cf22697260e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:20.741898 1466459 system_pods.go:74] duration metric: took 11.375298ms to wait for pod list to return data ...
	I0131 03:20:20.741912 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:20.748073 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:20.748110 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:20.748125 1466459 node_conditions.go:105] duration metric: took 6.206594ms to run NodePressure ...
	I0131 03:20:20.748147 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:21.022867 1466459 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028572 1466459 kubeadm.go:787] kubelet initialised
	I0131 03:20:21.028600 1466459 kubeadm.go:788] duration metric: took 5.696903ms waiting for restarted kubelet to initialise ...
	I0131 03:20:21.028612 1466459 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
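(Editorial aside.) The pod_ready.go lines that follow repeatedly test whether each system-critical pod reports the Ready condition. A rough client-go sketch of that check is below; the kubeconfig path and the 2-second poll interval are assumptions (the real test resolves the profile's kubeconfig context), while the namespace and pod name are taken from this log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path; substitute the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-ntmxp", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
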
	I0131 03:20:21.034373 1466459 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.040977 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041008 1466459 pod_ready.go:81] duration metric: took 6.605955ms waiting for pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.041021 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "coredns-5dd5756b68-ntmxp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.041029 1466459 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.047304 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047360 1466459 pod_ready.go:81] duration metric: took 6.317423ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.047379 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "etcd-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.047397 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:21.054356 1466459 pod_ready.go:97] node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054380 1466459 pod_ready.go:81] duration metric: took 6.969808ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	E0131 03:20:21.054393 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-958254" hosting pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-958254" has status "Ready":"False"
	I0131 03:20:21.054405 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.066327 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:19.688890 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.187659 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:22.403415 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.063558989s)
	I0131 03:20:22.403448 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0131 03:20:22.403467 1465496 ssh_runner.go:235] Completed: which crictl: (2.063583602s)
	I0131 03:20:22.403536 1465496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:20:22.403473 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.403667 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0131 03:20:22.453126 1465496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0131 03:20:22.453255 1465496 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:25.325221 1465496 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.871938157s)
	I0131 03:20:25.325266 1465496 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0131 03:20:25.325371 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.92167713s)
	I0131 03:20:25.325397 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0131 03:20:25.325430 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.325498 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0131 03:20:25.562106 1466459 pod_ready.go:102] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.562702 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.562730 1466459 pod_ready.go:81] duration metric: took 5.508313651s waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.562740 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570741 1466459 pod_ready.go:92] pod "kube-proxy-q6dmr" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:26.570776 1466459 pod_ready.go:81] duration metric: took 8.02796ms waiting for pod "kube-proxy-q6dmr" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.570788 1466459 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.532998 1465727 kubeadm.go:787] kubelet initialised
	I0131 03:20:23.533031 1465727 kubeadm.go:788] duration metric: took 39.585413252s waiting for restarted kubelet to initialise ...
	I0131 03:20:23.533041 1465727 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:20:23.538956 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545637 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.545665 1465727 pod_ready.go:81] duration metric: took 6.67341ms waiting for pod "coredns-5644d7b6d9-2g2fj" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.545679 1465727 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552018 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.552047 1465727 pod_ready.go:81] duration metric: took 6.359089ms waiting for pod "coredns-5644d7b6d9-8zt79" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.552061 1465727 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557416 1465727 pod_ready.go:92] pod "etcd-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.557446 1465727 pod_ready.go:81] duration metric: took 5.375834ms waiting for pod "etcd-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.557458 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563429 1465727 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.563458 1465727 pod_ready.go:81] duration metric: took 5.99092ms waiting for pod "kube-apiserver-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.563470 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931088 1465727 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:23.931123 1465727 pod_ready.go:81] duration metric: took 367.644608ms waiting for pod "kube-controller-manager-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:23.931135 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330635 1465727 pod_ready.go:92] pod "kube-proxy-7dtkz" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.330663 1465727 pod_ready.go:81] duration metric: took 399.520658ms waiting for pod "kube-proxy-7dtkz" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.330673 1465727 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731521 1465727 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:24.731554 1465727 pod_ready.go:81] duration metric: took 400.873461ms waiting for pod "kube-scheduler-old-k8s-version-711547" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:24.731568 1465727 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:26.738444 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:24.686688 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:26.688623 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:29.186579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.180697 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.855170809s)
	I0131 03:20:28.180729 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0131 03:20:28.180767 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:28.180841 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0131 03:20:29.652395 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.471522862s)
	I0131 03:20:29.652425 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0131 03:20:29.652463 1465496 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:29.652540 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0131 03:20:28.578108 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.077401 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.080970 1466459 pod_ready.go:102] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:28.739586 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:30.739736 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.238815 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.187176 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:33.188862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:31.502715 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.85014178s)
	I0131 03:20:31.502759 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0131 03:20:31.502787 1465496 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:31.502844 1465496 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0131 03:20:32.554143 1465496 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.051250967s)
	I0131 03:20:32.554188 1465496 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0131 03:20:32.554229 1465496 cache_images.go:123] Successfully loaded all cached images
	I0131 03:20:32.554282 1465496 cache_images.go:92] LoadImages completed in 16.590108265s
	I0131 03:20:32.554386 1465496 ssh_runner.go:195] Run: crio config
	I0131 03:20:32.619584 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:32.619612 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:32.619637 1465496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0131 03:20:32.619665 1465496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-625812 NodeName:no-preload-625812 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0131 03:20:32.619840 1465496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-625812"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
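The kubeadm config dumped above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of decoding such a stream document by document is shown below, assuming gopkg.in/yaml.v3 and a hypothetical local copy named kubeadm.yaml; it is illustrative only and not part of minikube's tooling.

// sketch: decode the multi-document kubeadm YAML and report each document's kind.
// Assumes gopkg.in/yaml.v3 and a hypothetical local file; not part of minikube itself.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical copy of the config dumped above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// each document carries its own kind/apiVersion, e.g. ClusterConfiguration
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}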
	I0131 03:20:32.619939 1465496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-625812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0131 03:20:32.620017 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0131 03:20:32.628855 1465496 binaries.go:44] Found k8s binaries, skipping transfer
	I0131 03:20:32.628963 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0131 03:20:32.636481 1465496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0131 03:20:32.654320 1465496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0131 03:20:32.670366 1465496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0131 03:20:32.688615 1465496 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0131 03:20:32.692444 1465496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0131 03:20:32.705599 1465496 certs.go:56] Setting up /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812 for IP: 192.168.72.23
	I0131 03:20:32.705644 1465496 certs.go:190] acquiring lock for shared ca certs: {Name:mkc319ee0a4bc97503f4ba5a7d8209b0def8c91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:20:32.705822 1465496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key
	I0131 03:20:32.705894 1465496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key
	I0131 03:20:32.705997 1465496 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/client.key
	I0131 03:20:32.706058 1465496 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key.a30a8404
	I0131 03:20:32.706092 1465496 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key
	I0131 03:20:32.706194 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem (1338 bytes)
	W0131 03:20:32.706221 1465496 certs.go:433] ignoring /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976_empty.pem, impossibly tiny 0 bytes
	I0131 03:20:32.706231 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca-key.pem (1679 bytes)
	I0131 03:20:32.706258 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/ca.pem (1078 bytes)
	I0131 03:20:32.706284 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/cert.pem (1123 bytes)
	I0131 03:20:32.706310 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/certs/key.pem (1679 bytes)
	I0131 03:20:32.706349 1465496 certs.go:437] found cert: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem (1708 bytes)
	I0131 03:20:32.707138 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0131 03:20:32.729972 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0131 03:20:32.753498 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0131 03:20:32.775599 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/no-preload-625812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0131 03:20:32.799455 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0131 03:20:32.822732 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0131 03:20:32.845839 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0131 03:20:32.868933 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0131 03:20:32.891565 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/ssl/certs/14199762.pem --> /usr/share/ca-certificates/14199762.pem (1708 bytes)
	I0131 03:20:32.914752 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0131 03:20:32.937305 1465496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18051-1412717/.minikube/certs/1419976.pem --> /usr/share/ca-certificates/1419976.pem (1338 bytes)
	I0131 03:20:32.960253 1465496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0131 03:20:32.976285 1465496 ssh_runner.go:195] Run: openssl version
	I0131 03:20:32.981630 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0131 03:20:32.990533 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994914 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 31 02:05 /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:32.994986 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0131 03:20:33.000249 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0131 03:20:33.009516 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1419976.pem && ln -fs /usr/share/ca-certificates/1419976.pem /etc/ssl/certs/1419976.pem"
	I0131 03:20:33.018643 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023046 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 31 02:15 /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.023106 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1419976.pem
	I0131 03:20:33.028238 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1419976.pem /etc/ssl/certs/51391683.0"
	I0131 03:20:33.036925 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14199762.pem && ln -fs /usr/share/ca-certificates/14199762.pem /etc/ssl/certs/14199762.pem"
	I0131 03:20:33.045708 1465496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050442 1465496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 31 02:15 /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.050536 1465496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14199762.pem
	I0131 03:20:33.056067 1465496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14199762.pem /etc/ssl/certs/3ec20f2e.0"
	I0131 03:20:33.065200 1465496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0131 03:20:33.069489 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0131 03:20:33.075140 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0131 03:20:33.080981 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0131 03:20:33.087018 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0131 03:20:33.092665 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0131 03:20:33.099605 1465496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
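The -checkend 86400 invocations above ask openssl whether each certificate expires within the next 24 hours. A minimal Go sketch of the same check follows, assuming a PEM-encoded certificate on disk; the file path and the 24-hour window are illustrative, not taken from minikube's code.

// sketch: report whether a PEM certificate expires within the next 24 hours,
// mirroring `openssl x509 -checkend 86400`. The path below is illustrative.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
}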
	I0131 03:20:33.106207 1465496 kubeadm.go:404] StartCluster: {Name:no-preload-625812 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-625812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 03:20:33.106310 1465496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0131 03:20:33.106376 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:33.150992 1465496 cri.go:89] found id: ""
	I0131 03:20:33.151088 1465496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0131 03:20:33.161105 1465496 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0131 03:20:33.161131 1465496 kubeadm.go:636] restartCluster start
	I0131 03:20:33.161219 1465496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0131 03:20:33.170638 1465496 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.172109 1465496 kubeconfig.go:92] found "no-preload-625812" server: "https://192.168.72.23:8443"
	I0131 03:20:33.175582 1465496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0131 03:20:33.185433 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.185523 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.196952 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.685512 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:33.685612 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:33.696682 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.186433 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.197957 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:34.685533 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:34.685640 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:34.696731 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:35.186267 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.186369 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.197982 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:33.578014 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:33.578038 1466459 pod_ready.go:81] duration metric: took 7.007241801s waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:33.578047 1466459 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:35.585039 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.585299 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.737680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:37.740698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686379 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:38.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:35.686193 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:35.686284 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:35.697343 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.185858 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.185960 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.197161 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:36.685546 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:36.685646 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:36.696796 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.186186 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.186280 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.197357 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:37.685916 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:37.686012 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:37.700288 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.185723 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.185820 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.197397 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:38.685651 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:38.685757 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:38.697204 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.185744 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.185844 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.198598 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:39.686185 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:39.686267 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:39.697736 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.186339 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.186432 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.198099 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:40.085028 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.585359 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.238117 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:42.239129 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.687687 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:43.186737 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:40.686132 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:40.686236 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:40.699172 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.185642 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.185744 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.198284 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:41.685827 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:41.685935 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:41.698501 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.185953 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.186088 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.196802 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:42.686371 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:42.686445 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:42.698536 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.186445 1465496 api_server.go:166] Checking apiserver status ...
	I0131 03:20:43.186560 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0131 03:20:43.198640 1465496 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0131 03:20:43.198679 1465496 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0131 03:20:43.198690 1465496 kubeadm.go:1135] stopping kube-system containers ...
	I0131 03:20:43.198704 1465496 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0131 03:20:43.198765 1465496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0131 03:20:43.235648 1465496 cri.go:89] found id: ""
	I0131 03:20:43.235740 1465496 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0131 03:20:43.252848 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:20:43.263501 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:20:43.263590 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274044 1465496 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0131 03:20:43.274075 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:43.402961 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.454642 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.051640672s)
	I0131 03:20:44.454673 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.660185 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.744795 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:44.816577 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:20:44.816690 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:45.316895 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:44.591170 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.085954 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:44.739730 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.240982 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.686082 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:47.687451 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:45.816800 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.317657 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:46.816892 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.317696 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:20:47.342389 1465496 api_server.go:72] duration metric: took 2.525810484s to wait for apiserver process to appear ...
	I0131 03:20:47.342423 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:20:47.342448 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.385155 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.385192 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.385206 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.431253 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0131 03:20:51.431293 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0131 03:20:51.842624 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:51.847644 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:51.847685 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.343335 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.348723 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0131 03:20:52.348780 1465496 api_server.go:103] status: https://192.168.72.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0131 03:20:52.842935 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:20:52.848263 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:20:52.863072 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:20:52.863104 1465496 api_server.go:131] duration metric: took 5.520672047s to wait for apiserver health ...
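The healthz polling above tolerates the 403 and 500 responses returned while the apiserver's post-start hooks finish, and stops once /healthz returns 200. A minimal sketch of that polling pattern follows, assuming anonymous access and skipping TLS verification purely for brevity; the endpoint is the one from the log, while the timeouts are illustrative.

// sketch: poll an apiserver /healthz endpoint until it returns 200 OK,
// tolerating transient 403/500 responses. TLS verification is skipped here
// only to keep the example short; the real check authenticates properly.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.23:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}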
	I0131 03:20:52.863113 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:20:52.863120 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:20:52.865141 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:20:49.585837 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.087030 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:49.738408 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:51.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:50.186754 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.197217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:52.866822 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:20:52.881451 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:20:52.918954 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:20:52.930533 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:20:52.930566 1465496 system_pods.go:61] "coredns-76f75df574-4qhpt" [9a5c2a49-f787-456a-9d15-cea2e111c6fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0131 03:20:52.930575 1465496 system_pods.go:61] "etcd-no-preload-625812" [2dbdb2c3-dd04-40de-80b4-caf18f1df2e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0131 03:20:52.930587 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [fd209808-5ebc-464e-b14b-88c6c830d7bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0131 03:20:52.930593 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [1f2cb9ec-cec9-4c45-8b78-0c9a9c0c9821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0131 03:20:52.930600 1465496 system_pods.go:61] "kube-proxy-8fdx9" [d1311d92-482b-4aa2-9dd3-053597717aea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0131 03:20:52.930607 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [f7b0ba21-6c1d-4c67-aa69-6086b28ddf78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0131 03:20:52.930614 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-sjndx" [6bcdb3bb-4e28-4127-a273-091b44059d10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:20:52.930620 1465496 system_pods.go:61] "storage-provisioner" [66a4003b-e35e-4216-8d27-e8897a6ddc71] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0131 03:20:52.930627 1465496 system_pods.go:74] duration metric: took 11.645516ms to wait for pod list to return data ...
	I0131 03:20:52.930635 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:20:52.943250 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:20:52.943291 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:20:52.943306 1465496 node_conditions.go:105] duration metric: took 12.665118ms to run NodePressure ...
	I0131 03:20:52.943328 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0131 03:20:53.231968 1465496 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239131 1465496 kubeadm.go:787] kubelet initialised
	I0131 03:20:53.239162 1465496 kubeadm.go:788] duration metric: took 7.159608ms waiting for restarted kubelet to initialise ...
	I0131 03:20:53.239171 1465496 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
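The pod_ready loop above repeatedly fetches each pod and inspects its Ready condition, logging "Ready":"False" until the condition flips. A minimal client-go sketch of that kind of check follows, assuming a kubeconfig on disk; the kubeconfig path, pod name, and timeout are illustrative assumptions, not values from the test harness.

// sketch: wait for a pod's Ready condition using client-go, similar in spirit
// to the pod_ready.go loop in the log. Kubeconfig path and pod name are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-4qhpt", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}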
	I0131 03:20:53.248561 1465496 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:55.256463 1465496 pod_ready.go:102] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.585633 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.086475 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.239922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.738132 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:54.686904 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:56.687249 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.187579 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:57.261900 1465496 pod_ready.go:92] pod "coredns-76f75df574-4qhpt" in "kube-system" namespace has status "Ready":"True"
	I0131 03:20:57.261928 1465496 pod_ready.go:81] duration metric: took 4.013340748s waiting for pod "coredns-76f75df574-4qhpt" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:57.261940 1465496 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:20:59.268779 1465496 pod_ready.go:102] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:59.586066 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:02.085212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:20:58.739138 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.739184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:03.243732 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:01.686704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.186767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:00.771061 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:00.771093 1465496 pod_ready.go:81] duration metric: took 3.509144879s waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:00.771107 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279749 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.279778 1465496 pod_ready.go:81] duration metric: took 1.508661327s waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.279792 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286520 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.286550 1465496 pod_ready.go:81] duration metric: took 6.748377ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.286564 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292455 1465496 pod_ready.go:92] pod "kube-proxy-8fdx9" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:02.292479 1465496 pod_ready.go:81] duration metric: took 5.904786ms waiting for pod "kube-proxy-8fdx9" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:02.292491 1465496 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:04.300076 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:04.086312 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.086965 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:05.737969 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:07.738025 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.686645 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:09.186769 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.300932 1465496 pod_ready.go:102] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:06.799183 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:21:06.799208 1465496 pod_ready.go:81] duration metric: took 4.506710382s waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:06.799220 1465496 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	I0131 03:21:08.806102 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:08.585128 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.586208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.085360 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:10.238339 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:12.739920 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.186807 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.686030 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:11.306903 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:13.808471 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.085478 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.584968 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.238994 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.738301 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:15.686243 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:17.687966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:16.306169 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:18.306368 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.585283 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.085635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:19.738554 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:21.739391 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.186216 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:22.186318 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.186605 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:20.807270 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:23.307367 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.086508 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.585310 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:24.239650 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.739133 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:26.687020 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.186319 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:25.807083 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:27.807373 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.809229 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:28.586494 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.085758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.086070 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:29.237951 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.239234 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:31.186403 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.186539 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:32.305137 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:34.306664 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.586212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.085235 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:33.737751 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.239168 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:35.187669 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:37.686468 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:36.806650 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:39.305925 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.586428 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.084565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:38.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.739723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.237973 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:40.186321 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:42.187314 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:44.188149 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:41.307318 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:43.806323 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.085539 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.585341 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.239462 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:47.738184 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:46.686042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.686866 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:45.806734 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:48.305446 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.305723 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.085346 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.085442 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:49.738268 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.239669 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:50.691518 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:53.186195 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:52.306654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.806020 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.085761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.586368 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:54.738548 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.739623 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:55.686288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:57.687383 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:56.807570 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.309552 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.084865 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.085071 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.085111 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:21:59.239410 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.239532 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:00.186408 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:02.186782 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.186839 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:01.806329 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:04.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.584749 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:07.586565 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:03.739463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:05.740128 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.237766 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.187392 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.685886 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:06.805996 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:08.807179 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.086003 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.585799 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.238067 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.239177 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:10.686223 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:12.686341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:11.305779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:13.307616 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.085808 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.584477 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:14.738859 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.238767 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.187173 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:17.687034 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:15.806730 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:18.306392 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.584606 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.585553 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:19.738470 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:21.739486 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.185802 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:22.187625 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:20.806949 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.306121 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:25.306685 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:23.585692 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.085348 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.237900 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.238299 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:24.686574 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:26.687740 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.186290 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:27.805534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:29.806722 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.585853 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.087573 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:28.738699 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:30.740922 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.241273 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:31.687338 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.186661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:32.306153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:34.306543 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:33.584981 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.585484 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.085009 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:35.739413 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.240386 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.687329 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:39.185388 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:36.308028 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:38.806629 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.085644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.585560 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:40.242599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:42.737723 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.186288 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.186859 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:41.306389 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:43.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.586579 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.085969 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:44.739244 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.237508 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:45.188774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:47.687222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:46.306909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:48.807077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.584667 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.584768 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.239422 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.738290 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:49.687896 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:52.188700 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:51.306677 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.806006 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:53.585081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.585777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.085122 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.237822 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:56.238861 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:54.686276 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:57.186263 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:55.806184 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.306128 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.306364 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.588396 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.598213 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:58.737414 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:00.737727 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.739935 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:22:59.685823 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:01.686758 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:04.185852 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:02.807107 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.305740 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.085415 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.585036 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:05.239645 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.739347 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:06.686504 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:08.687322 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:07.305816 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.305938 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:09.586253 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.085522 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:10.239099 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:12.738591 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.186874 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.686181 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:11.306129 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:13.806507 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.585172 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.586137 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:14.738697 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.739523 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:15.686511 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:17.687193 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:16.306767 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.808302 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:19.085852 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.586641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:18.739573 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.238839 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:20.187546 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:22.687140 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:21.306401 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.307029 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.085548 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:26.586436 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:23.737681 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.737740 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.738454 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:24.687572 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:27.186506 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:25.808456 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:28.306607 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:30.307207 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.085660 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.087058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.739207 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.238687 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:29.686331 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:31.688381 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.187104 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:32.805987 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.806181 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:33.586190 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.085219 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.085516 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:34.238857 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.239092 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.687993 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.688870 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:36.806571 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.808335 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.085919 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.585866 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:38.738192 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:40.738455 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:42.739283 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.185567 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.186680 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:41.307589 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:43.309027 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:44.586117 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.085597 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.238409 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.240204 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.685781 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.686167 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:45.807531 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:47.807973 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:50.308410 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.086271 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.086456 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.737691 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.739418 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:49.686475 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:51.687616 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.186599 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:52.806510 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.806619 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:53.586673 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.085541 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.085777 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:54.238680 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.238735 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.239259 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.685972 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.686560 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:56.806707 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:23:58.806764 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.087035 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.088546 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.239507 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.240463 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.686709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:02.687576 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:00.806909 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:03.306534 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.307522 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.585131 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.585178 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:04.738411 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:06.738605 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:05.186000 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.686048 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:07.806058 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.306442 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:08.585611 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.088448 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:09.238896 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:11.239934 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:10.186391 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.187940 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:12.680057 1465898 pod_ready.go:81] duration metric: took 4m0.000955013s waiting for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:12.680105 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-fct8b" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:12.680132 1465898 pod_ready.go:38] duration metric: took 4m8.549185211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:12.680181 1465898 kubeadm.go:640] restartCluster took 4m32.094843295s
	W0131 03:24:12.680310 1465898 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:12.680376 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:12.307149 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:14.307483 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.586901 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.087404 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:13.738698 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.239338 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.239499 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:16.806617 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:19.305298 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:18.585870 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.087112 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:20.737368 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:22.738599 1465727 pod_ready.go:102] pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:21.306715 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.807030 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:23.586072 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:25.586464 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.586525 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:24.731792 1465727 pod_ready.go:81] duration metric: took 4m0.00020412s waiting for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:24.731846 1465727 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-74d5856cc6-m4xn5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:24.731869 1465727 pod_ready.go:38] duration metric: took 4m1.198813077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:24.731907 1465727 kubeadm.go:640] restartCluster took 5m3.213957096s
	W0131 03:24:24.731983 1465727 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:24.732022 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:26.064348 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.383924825s)
	I0131 03:24:26.064423 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:26.076943 1465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:26.087474 1465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:26.095980 1465898 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:26.096026 1465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:26.286603 1465898 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:25.808330 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:27.809779 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.308001 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:30.087127 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:32.589212 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:31.227776 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.495715112s)
	I0131 03:24:31.227855 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:31.241889 1465727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:31.251082 1465727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:31.259843 1465727 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:31.259887 1465727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0131 03:24:31.469869 1465727 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:32.310672 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:34.808959 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:36.696825 1465898 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:36.696904 1465898 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:36.696998 1465898 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:36.697121 1465898 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:36.697231 1465898 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:36.697306 1465898 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:36.699102 1465898 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:36.699244 1465898 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:36.699334 1465898 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:36.699475 1465898 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:36.699584 1465898 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:36.699700 1465898 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:36.699785 1465898 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:36.699873 1465898 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:36.699958 1465898 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:36.700052 1465898 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:36.700172 1465898 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:36.700217 1465898 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:36.700283 1465898 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:36.700345 1465898 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:36.700406 1465898 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:36.700482 1465898 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:36.700549 1465898 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:36.700647 1465898 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:36.700731 1465898 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:36.702370 1465898 out.go:204]   - Booting up control plane ...
	I0131 03:24:36.702525 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:36.702658 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:36.702731 1465898 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:36.702855 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:36.702975 1465898 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:36.703038 1465898 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:36.703248 1465898 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:36.703360 1465898 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503117 seconds
	I0131 03:24:36.703517 1465898 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:36.703652 1465898 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:36.703734 1465898 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:36.703950 1465898 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-873005 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:36.704029 1465898 kubeadm.go:322] [bootstrap-token] Using token: 51ueuu.c5jl6zenf29j1pbj
	I0131 03:24:36.706123 1465898 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:36.706237 1465898 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:36.706316 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:36.706475 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:36.706662 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:36.706829 1465898 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:36.706946 1465898 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:36.707093 1465898 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:36.707179 1465898 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:36.707226 1465898 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:36.707236 1465898 kubeadm.go:322] 
	I0131 03:24:36.707310 1465898 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:36.707317 1465898 kubeadm.go:322] 
	I0131 03:24:36.707411 1465898 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:36.707418 1465898 kubeadm.go:322] 
	I0131 03:24:36.707438 1465898 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:36.707518 1465898 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:36.707590 1465898 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:36.707604 1465898 kubeadm.go:322] 
	I0131 03:24:36.707693 1465898 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:36.707706 1465898 kubeadm.go:322] 
	I0131 03:24:36.707775 1465898 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:36.707785 1465898 kubeadm.go:322] 
	I0131 03:24:36.707834 1465898 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:36.707932 1465898 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:36.708029 1465898 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:36.708038 1465898 kubeadm.go:322] 
	I0131 03:24:36.708135 1465898 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:36.708236 1465898 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:36.708245 1465898 kubeadm.go:322] 
	I0131 03:24:36.708341 1465898 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708458 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:36.708490 1465898 kubeadm.go:322] 	--control-plane 
	I0131 03:24:36.708499 1465898 kubeadm.go:322] 
	I0131 03:24:36.708601 1465898 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:36.708611 1465898 kubeadm.go:322] 
	I0131 03:24:36.708703 1465898 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 51ueuu.c5jl6zenf29j1pbj \
	I0131 03:24:36.708836 1465898 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:36.708855 1465898 cni.go:84] Creating CNI manager for ""
	I0131 03:24:36.708865 1465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:36.710643 1465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:33.579236 1466459 pod_ready.go:81] duration metric: took 4m0.001168183s waiting for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" ...
	E0131 03:24:33.579284 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-d8x5f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:24:33.579320 1466459 pod_ready.go:38] duration metric: took 4m12.550695133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:33.579357 1466459 kubeadm.go:640] restartCluster took 4m32.725356038s
	W0131 03:24:33.579451 1466459 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:24:33.579495 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:24:36.712379 1465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:36.727135 1465898 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:36.752650 1465898 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:36.752760 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.752766 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=default-k8s-diff-port-873005 minikube.k8s.io/updated_at=2024_01_31T03_24_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.833601 1465898 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:37.204982 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:37.706104 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.205928 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:38.705169 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:39.205448 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:36.810623 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:39.308000 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:44.456046 1465727 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0131 03:24:44.456133 1465727 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:44.456239 1465727 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:44.456349 1465727 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:44.456507 1465727 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:44.456673 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:44.456815 1465727 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:44.456888 1465727 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0131 03:24:44.456975 1465727 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:44.458558 1465727 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:44.458637 1465727 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:44.458740 1465727 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:44.458837 1465727 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:44.458937 1465727 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:44.459040 1465727 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:44.459117 1465727 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:44.459212 1465727 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:44.459291 1465727 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:44.459385 1465727 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:44.459491 1465727 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:44.459552 1465727 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:44.459628 1465727 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:44.459691 1465727 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:44.459755 1465727 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:44.459827 1465727 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:44.459899 1465727 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:44.460002 1465727 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:44.461481 1465727 out.go:204]   - Booting up control plane ...
	I0131 03:24:44.461592 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:44.461687 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:44.461801 1465727 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:44.461930 1465727 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:44.462130 1465727 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:44.462255 1465727 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503405 seconds
	I0131 03:24:44.462398 1465727 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:44.462577 1465727 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:44.462653 1465727 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:44.462817 1465727 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-711547 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0131 03:24:44.462913 1465727 kubeadm.go:322] [bootstrap-token] Using token: etlsjx.t1u4cz6ewuek932w
	I0131 03:24:44.465248 1465727 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:44.465404 1465727 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:44.465615 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:44.465805 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:44.465987 1465727 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:44.466088 1465727 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:44.466170 1465727 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:44.466239 1465727 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:44.466247 1465727 kubeadm.go:322] 
	I0131 03:24:44.466332 1465727 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:44.466354 1465727 kubeadm.go:322] 
	I0131 03:24:44.466456 1465727 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:44.466473 1465727 kubeadm.go:322] 
	I0131 03:24:44.466524 1465727 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:44.466596 1465727 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:44.466677 1465727 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:44.466696 1465727 kubeadm.go:322] 
	I0131 03:24:44.466764 1465727 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:44.466870 1465727 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:44.466971 1465727 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:44.466988 1465727 kubeadm.go:322] 
	I0131 03:24:44.467085 1465727 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0131 03:24:44.467196 1465727 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:44.467208 1465727 kubeadm.go:322] 
	I0131 03:24:44.467300 1465727 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467443 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:44.467479 1465727 kubeadm.go:322]     --control-plane 	  
	I0131 03:24:44.467488 1465727 kubeadm.go:322] 
	I0131 03:24:44.467588 1465727 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:44.467599 1465727 kubeadm.go:322] 
	I0131 03:24:44.467695 1465727 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token etlsjx.t1u4cz6ewuek932w \
	I0131 03:24:44.467834 1465727 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:44.467849 1465727 cni.go:84] Creating CNI manager for ""
	I0131 03:24:44.467858 1465727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:44.470130 1465727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:24:39.705234 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.205164 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:40.705674 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.205045 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.705592 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.205813 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:42.705913 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.205465 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:43.705236 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.205365 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:41.807553 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:43.809153 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:47.613982 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.034446752s)
	I0131 03:24:47.614087 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:47.627141 1466459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:24:47.635785 1466459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:24:47.643856 1466459 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:24:47.643912 1466459 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:24:47.866988 1466459 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:24:44.472066 1465727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:44.484082 1465727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:44.503062 1465727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:44.503138 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.503164 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=old-k8s-version-711547 minikube.k8s.io/updated_at=2024_01_31T03_24_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.557194 1465727 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:44.796311 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.296601 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.796904 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.296474 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.796658 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.296647 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.796712 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.296469 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:44.705251 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.205696 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:45.705947 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.205519 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.705735 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.205285 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:47.706009 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.205416 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:48.705969 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.205783 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:46.306658 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:48.307077 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:50.311654 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:49.705636 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.205958 1465898 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.456803 1465898 kubeadm.go:1088] duration metric: took 13.704121927s to wait for elevateKubeSystemPrivileges.
	I0131 03:24:50.456854 1465898 kubeadm.go:406] StartCluster complete in 5m9.932475085s
	I0131 03:24:50.456883 1465898 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.457001 1465898 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:24:50.460015 1465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:24:50.460408 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:24:50.460617 1465898 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:24:50.460718 1465898 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460745 1465898 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.460753 1465898 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:24:50.460798 1465898 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.460831 1465898 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-873005"
	I0131 03:24:50.460855 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461315 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461342 1465898 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-873005"
	I0131 03:24:50.461361 1465898 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:50.461364 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0131 03:24:50.461369 1465898 addons.go:243] addon metrics-server should already be in state true
	I0131 03:24:50.461410 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.461322 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461644 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.461778 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.461812 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.460670 1465898 config.go:182] Loaded profile config "default-k8s-diff-port-873005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:24:50.486168 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0131 03:24:50.486189 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0131 03:24:50.486323 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0131 03:24:50.486737 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487153 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.487761 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.487781 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488055 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.488074 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.488193 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.488460 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.488587 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.488984 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.489649 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.489717 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.490413 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.490433 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.492357 1465898 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-873005"
	W0131 03:24:50.492372 1465898 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:24:50.492402 1465898 host.go:66] Checking if "default-k8s-diff-port-873005" exists ...
	I0131 03:24:50.492774 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.492815 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.493142 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.493853 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.493904 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.510041 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0131 03:24:50.510628 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.511294 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.511316 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.511749 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.511982 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.512352 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0131 03:24:50.512842 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.513435 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.513454 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.513922 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.513984 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.514319 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0131 03:24:50.516752 1465898 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:24:50.514718 1465898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:24:50.514788 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.518232 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:24:50.518238 1465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:24:50.518248 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:24:50.518271 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.521721 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.522659 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.522988 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.523038 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.523050 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.523231 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.523401 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.523571 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.526843 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.530691 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.532381 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.534246 1465898 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:24:50.535799 1465898 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.535826 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:24:50.535848 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.538666 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.538998 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.539031 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.539275 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.540037 1465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0131 03:24:50.540217 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.540435 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.540502 1465898 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:24:50.540575 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.541462 1465898 main.go:141] libmachine: Using API Version  1
	I0131 03:24:50.541480 1465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:24:50.541918 1465898 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:24:50.542136 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetState
	I0131 03:24:50.543588 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .DriverName
	I0131 03:24:50.546790 1465898 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.546807 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:24:50.546828 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHHostname
	I0131 03:24:50.549791 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550227 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:ab:c7", ip: ""} in network mk-default-k8s-diff-port-873005: {Iface:virbr1 ExpiryTime:2024-01-31 04:19:26 +0000 UTC Type:0 Mac:52:54:00:b6:ab:c7 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:default-k8s-diff-port-873005 Clientid:01:52:54:00:b6:ab:c7}
	I0131 03:24:50.550254 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | domain default-k8s-diff-port-873005 has defined IP address 192.168.61.123 and MAC address 52:54:00:b6:ab:c7 in network mk-default-k8s-diff-port-873005
	I0131 03:24:50.550545 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHPort
	I0131 03:24:50.550712 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHKeyPath
	I0131 03:24:50.550827 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .GetSSHUsername
	I0131 03:24:50.550914 1465898 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/default-k8s-diff-port-873005/id_rsa Username:docker}
	I0131 03:24:50.720404 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:24:50.750602 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:24:50.750631 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:24:50.770493 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:24:50.781740 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:24:50.831005 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:24:50.831037 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:24:50.957145 1465898 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:50.957195 1465898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:24:50.995868 1465898 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-873005" context rescaled to 1 replicas
	I0131 03:24:50.995924 1465898 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.123 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:24:50.997774 1465898 out.go:177] * Verifying Kubernetes components...
	I0131 03:24:50.999400 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:24:51.127181 1465898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:24:52.814257 1465898 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.093763301s)
	I0131 03:24:52.814295 1465898 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0131 03:24:53.442603 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.660817091s)
	I0131 03:24:53.442735 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.315510869s)
	I0131 03:24:53.442653 1465898 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.443214595s)
	I0131 03:24:53.442784 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442807 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442746 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442847 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.442800 1465898 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.442686 1465898 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.672154364s)
	I0131 03:24:53.442931 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.442944 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443178 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443204 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443234 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443271 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443290 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443307 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443324 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443326 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443342 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443355 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443370 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443443 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443463 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443474 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.443484 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.443555 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.443558 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443571 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443834 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.443843 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.443852 1465898 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-873005"
	I0131 03:24:53.443857 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.444009 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.444018 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:53.477413 1465898 main.go:141] libmachine: Making call to close driver server
	I0131 03:24:53.477442 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) Calling .Close
	I0131 03:24:53.477848 1465898 node_ready.go:49] node "default-k8s-diff-port-873005" has status "Ready":"True"
	I0131 03:24:53.477878 1465898 node_ready.go:38] duration metric: took 34.988647ms waiting for node "default-k8s-diff-port-873005" to be "Ready" ...
	I0131 03:24:53.477903 1465898 main.go:141] libmachine: (default-k8s-diff-port-873005) DBG | Closing plugin on server side
	I0131 03:24:53.477913 1465898 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:24:53.477891 1465898 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:24:53.477926 1465898 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:24:48.797209 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.296541 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:49.796400 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.297357 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:50.797175 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.297121 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:51.796457 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.297151 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:52.797043 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.296354 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:53.480701 1465898 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0131 03:24:53.482138 1465898 addons.go:505] enable addons completed in 3.021541847s: enabled=[storage-provisioner metrics-server default-storageclass]
	I0131 03:24:53.518183 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:52.806757 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:54.808761 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:53.796405 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.296358 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:54.796988 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.296633 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:55.797131 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.296750 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:56.797103 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.296955 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:57.796330 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.296387 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.837963 1466459 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0131 03:24:58.838075 1466459 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:24:58.838193 1466459 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:24:58.838328 1466459 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:24:58.838507 1466459 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:24:58.838599 1466459 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:24:58.840259 1466459 out.go:204]   - Generating certificates and keys ...
	I0131 03:24:58.840364 1466459 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:24:58.840490 1466459 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:24:58.840620 1466459 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:24:58.840718 1466459 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:24:58.840826 1466459 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:24:58.840905 1466459 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:24:58.841008 1466459 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:24:58.841106 1466459 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:24:58.841214 1466459 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:24:58.841304 1466459 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:24:58.841349 1466459 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:24:58.841420 1466459 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:24:58.841492 1466459 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:24:58.841553 1466459 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:24:58.841621 1466459 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:24:58.841694 1466459 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:24:58.841805 1466459 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:24:58.841887 1466459 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:24:58.843555 1466459 out.go:204]   - Booting up control plane ...
	I0131 03:24:58.843684 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:24:58.843804 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:24:58.843917 1466459 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:24:58.844072 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:24:58.844208 1466459 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:24:58.844297 1466459 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:24:58.844540 1466459 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:24:58.844657 1466459 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003861 seconds
	I0131 03:24:58.844797 1466459 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:24:58.844947 1466459 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:24:58.845022 1466459 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:24:58.845232 1466459 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-958254 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:24:58.845309 1466459 kubeadm.go:322] [bootstrap-token] Using token: ash1vg.z2czyygl2nysl4yb
	I0131 03:24:58.846832 1466459 out.go:204]   - Configuring RBAC rules ...
	I0131 03:24:58.846943 1466459 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:24:58.847042 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:24:58.847238 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:24:58.847445 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:24:58.847620 1466459 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:24:58.847735 1466459 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:24:58.847908 1466459 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:24:58.847969 1466459 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:24:58.848034 1466459 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:24:58.848045 1466459 kubeadm.go:322] 
	I0131 03:24:58.848142 1466459 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:24:58.848152 1466459 kubeadm.go:322] 
	I0131 03:24:58.848279 1466459 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:24:58.848308 1466459 kubeadm.go:322] 
	I0131 03:24:58.848355 1466459 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:24:58.848440 1466459 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:24:58.848515 1466459 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:24:58.848531 1466459 kubeadm.go:322] 
	I0131 03:24:58.848611 1466459 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:24:58.848622 1466459 kubeadm.go:322] 
	I0131 03:24:58.848684 1466459 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:24:58.848692 1466459 kubeadm.go:322] 
	I0131 03:24:58.848769 1466459 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:24:58.848884 1466459 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:24:58.848987 1466459 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:24:58.848994 1466459 kubeadm.go:322] 
	I0131 03:24:58.849127 1466459 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:24:58.849252 1466459 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:24:58.849265 1466459 kubeadm.go:322] 
	I0131 03:24:58.849390 1466459 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849540 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:24:58.849572 1466459 kubeadm.go:322] 	--control-plane 
	I0131 03:24:58.849587 1466459 kubeadm.go:322] 
	I0131 03:24:58.849698 1466459 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:24:58.849710 1466459 kubeadm.go:322] 
	I0131 03:24:58.849817 1466459 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ash1vg.z2czyygl2nysl4yb \
	I0131 03:24:58.849963 1466459 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:24:58.849981 1466459 cni.go:84] Creating CNI manager for ""
	I0131 03:24:58.849991 1466459 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:24:58.851748 1466459 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
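	[editorial note] The bridge CNI step announced above writes a small conflist to /etc/cni/net.d/1-k8s.conflist (the 457-byte copy shows up later in this log for the same process, 1466459). As a rough illustration only, and not the exact file minikube generates, a bridge conflist of that general shape would look like:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }

	Only the target path and file size are taken from the log; every field and the subnet above are assumptions for illustration.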
	I0131 03:24:54.532127 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.532155 1465898 pod_ready.go:81] duration metric: took 1.013942045s waiting for pod "coredns-5dd5756b68-2jm8s" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.532164 1465898 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537895 1465898 pod_ready.go:92] pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.537924 1465898 pod_ready.go:81] duration metric: took 5.752669ms waiting for pod "coredns-5dd5756b68-5gdks" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.537937 1465898 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543819 1465898 pod_ready.go:92] pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.543850 1465898 pod_ready.go:81] duration metric: took 5.903392ms waiting for pod "etcd-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.543863 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549279 1465898 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.549303 1465898 pod_ready.go:81] duration metric: took 5.431331ms waiting for pod "kube-apiserver-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.549315 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647791 1465898 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:54.647830 1465898 pod_ready.go:81] duration metric: took 98.504261ms waiting for pod "kube-controller-manager-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:54.647846 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446878 1465898 pod_ready.go:92] pod "kube-proxy-blwwq" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.446913 1465898 pod_ready.go:81] duration metric: took 799.058225ms waiting for pod "kube-proxy-blwwq" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.446927 1465898 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848226 1465898 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace has status "Ready":"True"
	I0131 03:24:55.848261 1465898 pod_ready.go:81] duration metric: took 401.323547ms waiting for pod "kube-scheduler-default-k8s-diff-port-873005" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:55.848275 1465898 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	I0131 03:24:57.855091 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:57.306243 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:59.307152 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:24:58.796423 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.297312 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.796598 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.296932 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.797306 1465727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.963954 1465727 kubeadm.go:1088] duration metric: took 16.460870964s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:00.964007 1465727 kubeadm.go:406] StartCluster complete in 5m39.492487154s
	I0131 03:25:00.964037 1465727 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.964135 1465727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:00.965942 1465727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:00.966222 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:00.966379 1465727 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:00.966464 1465727 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966478 1465727 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966474 1465727 config.go:182] Loaded profile config "old-k8s-version-711547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0131 03:25:00.966502 1465727 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-711547"
	I0131 03:25:00.966514 1465727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-711547"
	I0131 03:25:00.966522 1465727 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-711547"
	W0131 03:25:00.966531 1465727 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:00.966493 1465727 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-711547"
	W0131 03:25:00.966557 1465727 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:00.966579 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966610 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.966981 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.966993 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967028 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967040 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.967142 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.967186 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.986034 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0131 03:25:00.986291 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0131 03:25:00.986619 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.986746 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.987299 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987320 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987467 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.987479 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.987834 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.988010 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:00.988075 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0131 03:25:00.988399 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:00.989011 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:00.989031 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:00.989620 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.990204 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.990247 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.990830 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:00.991921 1465727 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-711547"
	W0131 03:25:00.991946 1465727 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:00.991979 1465727 host.go:66] Checking if "old-k8s-version-711547" exists ...
	I0131 03:25:00.992390 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.992429 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:00.996772 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:00.996817 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.009234 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0131 03:25:01.009861 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.010560 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.010580 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.011185 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.011401 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.013070 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0131 03:25:01.013907 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.014029 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.016324 1465727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:01.014597 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.017922 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.018046 1465727 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.018070 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:01.018094 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.018526 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.019101 1465727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:01.019150 1465727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:01.019442 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38673
	I0131 03:25:01.019888 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.020393 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.020424 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.020822 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.020992 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.021500 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022222 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.022242 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.022449 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.022654 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.022821 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.022997 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.023406 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.025473 1465727 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:01.026870 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:01.026888 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:01.026904 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.029751 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030085 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.030100 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.030398 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.030647 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.030818 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.030977 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.037553 1465727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0131 03:25:01.038049 1465727 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:01.038517 1465727 main.go:141] libmachine: Using API Version  1
	I0131 03:25:01.038542 1465727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:01.038963 1465727 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:01.039329 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetState
	I0131 03:25:01.041534 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .DriverName
	I0131 03:25:01.042115 1465727 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.042137 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:01.042170 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHHostname
	I0131 03:25:01.045444 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.045973 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:2a:99", ip: ""} in network mk-old-k8s-version-711547: {Iface:virbr2 ExpiryTime:2024-01-31 04:19:04 +0000 UTC Type:0 Mac:52:54:00:1b:2a:99 Iaid: IPaddr:192.168.50.63 Prefix:24 Hostname:old-k8s-version-711547 Clientid:01:52:54:00:1b:2a:99}
	I0131 03:25:01.045992 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | domain old-k8s-version-711547 has defined IP address 192.168.50.63 and MAC address 52:54:00:1b:2a:99 in network mk-old-k8s-version-711547
	I0131 03:25:01.046187 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHPort
	I0131 03:25:01.046374 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHKeyPath
	I0131 03:25:01.046619 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .GetSSHUsername
	I0131 03:25:01.046751 1465727 sshutil.go:53] new ssh client: &{IP:192.168.50.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/old-k8s-version-711547/id_rsa Username:docker}
	I0131 03:25:01.284926 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:01.284951 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:01.298019 1465727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:01.338666 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:01.364117 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:01.383424 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:01.383460 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:01.499627 1465727 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.499676 1465727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:01.557563 1465727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:01.633792 1465727 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-711547" context rescaled to 1 replicas
	I0131 03:25:01.633844 1465727 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.63 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:01.636944 1465727 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:01.638596 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:02.375769 1465727 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.07770508s)
	I0131 03:25:02.375806 1465727 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
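	[editorial note] The sed pipeline completed above injects a hosts block and a log directive into the CoreDNS Corefile held in the coredns ConfigMap, which is what the "host record injected" message refers to. Based only on the sed expressions visible in the log (the surrounding Corefile lines are assumed, typical defaults), the edited region would read approximately:

	    .:53 {
	        errors
	        log
	        health
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }

	The hosts block, the log line, and the 192.168.50.1 address come from the log itself; the rest of the block is a sketch of a common Corefile layout, not the file as written on this node.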
	I0131 03:25:02.849278 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.485115978s)
	I0131 03:25:02.849343 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849348 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.510642603s)
	I0131 03:25:02.849361 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849397 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849411 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849431 1465727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.291827391s)
	I0131 03:25:02.849463 1465727 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.210839065s)
	I0131 03:25:02.849466 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.849478 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.849490 1465727 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.851686 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851687 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851705 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851714 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851701 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851724 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.851732 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851715 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.851726 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851744 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851749 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851754 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.851736 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.851812 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.851828 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.852136 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852158 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852178 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852187 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852194 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852203 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.852214 1465727 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-711547"
	I0131 03:25:02.852220 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.852249 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.852257 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.878278 1465727 node_ready.go:49] node "old-k8s-version-711547" has status "Ready":"True"
	I0131 03:25:02.878313 1465727 node_ready.go:38] duration metric: took 28.809729ms waiting for node "old-k8s-version-711547" to be "Ready" ...
	I0131 03:25:02.878339 1465727 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:02.906619 1465727 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:02.910781 1465727 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:02.910809 1465727 main.go:141] libmachine: (old-k8s-version-711547) Calling .Close
	I0131 03:25:02.911127 1465727 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:02.911137 1465727 main.go:141] libmachine: (old-k8s-version-711547) DBG | Closing plugin on server side
	I0131 03:25:02.911148 1465727 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:02.913178 1465727 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0131 03:24:58.853196 1466459 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:24:58.880016 1466459 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:24:58.909967 1466459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:24:58.910062 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:58.910111 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=embed-certs-958254 minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.271954 1466459 ops.go:34] apiserver oom_adj: -16
	I0131 03:24:59.310346 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:24:59.810934 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.310635 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:00.810402 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.310569 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:01.810714 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.310744 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.811360 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:03.311376 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:02.915069 1465727 addons.go:505] enable addons completed in 1.948706414s: enabled=[metrics-server storage-provisioner default-storageclass]
	I0131 03:24:59.856962 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:02.358614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:01.807470 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:04.306044 1465496 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:03.811326 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.310435 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.811033 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.310537 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:05.810596 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.311182 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:06.811200 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.310633 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:07.810619 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:08.310985 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:04.914636 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:07.415226 1465727 pod_ready.go:102] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.414866 1465727 pod_ready.go:92] pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.414894 1465727 pod_ready.go:81] duration metric: took 5.508246838s waiting for pod "coredns-5644d7b6d9-qq7jp" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.414904 1465727 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421152 1465727 pod_ready.go:92] pod "kube-proxy-wzft2" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:08.421177 1465727 pod_ready.go:81] duration metric: took 6.2664ms waiting for pod "kube-proxy-wzft2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:08.421191 1465727 pod_ready.go:38] duration metric: took 5.542837407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:08.421243 1465727 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:08.421313 1465727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:08.439228 1465727 api_server.go:72] duration metric: took 6.805346982s to wait for apiserver process to appear ...
	I0131 03:25:08.439258 1465727 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:08.439321 1465727 api_server.go:253] Checking apiserver healthz at https://192.168.50.63:8443/healthz ...
	I0131 03:25:08.445886 1465727 api_server.go:279] https://192.168.50.63:8443/healthz returned 200:
	ok
	I0131 03:25:08.446826 1465727 api_server.go:141] control plane version: v1.16.0
	I0131 03:25:08.446848 1465727 api_server.go:131] duration metric: took 7.582095ms to wait for apiserver health ...
	I0131 03:25:08.446856 1465727 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:08.450063 1465727 system_pods.go:59] 4 kube-system pods found
	I0131 03:25:08.450085 1465727 system_pods.go:61] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.450089 1465727 system_pods.go:61] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.450095 1465727 system_pods.go:61] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.450100 1465727 system_pods.go:61] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.450112 1465727 system_pods.go:74] duration metric: took 3.250434ms to wait for pod list to return data ...
	I0131 03:25:08.450121 1465727 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:08.452528 1465727 default_sa.go:45] found service account: "default"
	I0131 03:25:08.452546 1465727 default_sa.go:55] duration metric: took 2.420247ms for default service account to be created ...
	I0131 03:25:08.452553 1465727 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:08.457485 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.457514 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.457522 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.457533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.457540 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.457561 1465727 retry.go:31] will retry after 235.942588ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:04.856217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.856378 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:08.857457 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:06.800354 1465496 pod_ready.go:81] duration metric: took 4m0.001111271s waiting for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:06.800395 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-sjndx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0131 03:25:06.800424 1465496 pod_ready.go:38] duration metric: took 4m13.561240535s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:06.800474 1465496 kubeadm.go:640] restartCluster took 4m33.63933558s
	W0131 03:25:06.800585 1465496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0131 03:25:06.800626 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0131 03:25:08.811193 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.310464 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:09.810641 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.310665 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.810667 1466459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:10.995304 1466459 kubeadm.go:1088] duration metric: took 12.08531849s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:10.995343 1466459 kubeadm.go:406] StartCluster complete in 5m10.197561628s
	I0131 03:25:10.995368 1466459 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.995476 1466459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:10.997565 1466459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:10.998562 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:10.998861 1466459 config.go:182] Loaded profile config "embed-certs-958254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:25:10.999077 1466459 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:10.999167 1466459 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-958254"
	I0131 03:25:10.999184 1466459 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-958254"
	W0131 03:25:10.999192 1466459 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:10.999198 1466459 addons.go:69] Setting default-storageclass=true in profile "embed-certs-958254"
	I0131 03:25:10.999232 1466459 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-958254"
	I0131 03:25:10.999234 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:10.999598 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999631 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999673 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:10.999709 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:10.999738 1466459 addons.go:69] Setting metrics-server=true in profile "embed-certs-958254"
	I0131 03:25:10.999759 1466459 addons.go:234] Setting addon metrics-server=true in "embed-certs-958254"
	W0131 03:25:10.999767 1466459 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:10.999811 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.000160 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.000206 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.020646 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
	I0131 03:25:11.020716 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38579
	I0131 03:25:11.021273 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021412 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.021944 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.021972 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022107 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.022139 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.022542 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022540 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.022777 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.023181 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.023224 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.027202 1466459 addons.go:234] Setting addon default-storageclass=true in "embed-certs-958254"
	W0131 03:25:11.027230 1466459 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:11.027263 1466459 host.go:66] Checking if "embed-certs-958254" exists ...
	I0131 03:25:11.027702 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.027754 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.028003 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0131 03:25:11.029048 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.029571 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.029590 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.030209 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.030885 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.030931 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.042923 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0131 03:25:11.043492 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.044071 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.044086 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.044497 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.044800 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.046645 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.049444 1466459 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:11.051401 1466459 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.051441 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:11.051477 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.054476 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055341 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.055429 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0131 03:25:11.055608 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.055626 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.055808 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.056025 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.056244 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.056409 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.056920 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.056932 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.056989 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0131 03:25:11.057274 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.057428 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.057495 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.057847 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.057860 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.058662 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.059343 1466459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:11.059372 1466459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:11.059555 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.061701 1466459 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:11.063119 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:11.063138 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:11.063159 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.066101 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066408 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.066423 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.066762 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.066931 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.067054 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.067162 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.080881 1466459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0131 03:25:11.081403 1466459 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:11.081919 1466459 main.go:141] libmachine: Using API Version  1
	I0131 03:25:11.081931 1466459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:11.082442 1466459 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:11.082905 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetState
	I0131 03:25:11.085059 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .DriverName
	I0131 03:25:11.085518 1466459 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.085529 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:11.085545 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHHostname
	I0131 03:25:11.087954 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.088806 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHPort
	I0131 03:25:11.088858 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:06:de", ip: ""} in network mk-embed-certs-958254: {Iface:virbr3 ExpiryTime:2024-01-31 04:19:46 +0000 UTC Type:0 Mac:52:54:00:13:06:de Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:embed-certs-958254 Clientid:01:52:54:00:13:06:de}
	I0131 03:25:11.088868 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | domain embed-certs-958254 has defined IP address 192.168.39.232 and MAC address 52:54:00:13:06:de in network mk-embed-certs-958254
	I0131 03:25:11.089011 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHKeyPath
	I0131 03:25:11.089197 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .GetSSHUsername
	I0131 03:25:11.089609 1466459 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/embed-certs-958254/id_rsa Username:docker}
	I0131 03:25:11.229346 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:11.255093 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:11.255124 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:11.278162 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:11.314832 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:11.314860 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:11.374433 1466459 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.374463 1466459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:11.386186 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:11.431597 1466459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:11.617487 1466459 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-958254" context rescaled to 1 replicas
	I0131 03:25:11.617543 1466459 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:11.620222 1466459 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:11.621888 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:08.700194 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.700226 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.700232 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.700238 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.700243 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.700267 1465727 retry.go:31] will retry after 264.487072ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:08.970950 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:08.970994 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:08.971002 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:08.971013 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:08.971020 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:08.971113 1465727 retry.go:31] will retry after 296.249207ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.273631 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.273666 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.273675 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.273683 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.273696 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.273722 1465727 retry.go:31] will retry after 556.880076ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:09.835957 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:09.835985 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:09.835991 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:09.835997 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:09.836002 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:09.836020 1465727 retry.go:31] will retry after 541.012405ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:10.382622 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:10.382657 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:10.382665 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:10.382674 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:10.382681 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:10.382705 1465727 retry.go:31] will retry after 644.079363ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.036738 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.036777 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.036785 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.036796 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.036803 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.036825 1465727 retry.go:31] will retry after 832.963851ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:11.877526 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:11.877569 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:11.877578 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:11.877589 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:11.877597 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:11.877635 1465727 retry.go:31] will retry after 1.088792554s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:12.972355 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:12.972391 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:12.972397 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:12.972403 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:12.972408 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:12.972428 1465727 retry.go:31] will retry after 1.37018086s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:13.615542 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.337333269s)
	I0131 03:25:13.615599 1466459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.229373467s)
	I0131 03:25:13.615607 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615633 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.615632 1466459 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:13.615738 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.386359945s)
	I0131 03:25:13.615790 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.615807 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616101 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616104 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616109 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.616118 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616129 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616138 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616174 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616184 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.616194 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.616204 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.616351 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.616374 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.617924 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.618094 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.618057 1466459 main.go:141] libmachine: (embed-certs-958254) DBG | Closing plugin on server side
	I0131 03:25:13.783459 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.783487 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.783847 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.783872 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.966310 1466459 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.344369704s)
	I0131 03:25:13.966372 1466459 node_ready.go:35] waiting up to 6m0s for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.966498 1466459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.534826964s)
	I0131 03:25:13.966582 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.966602 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.966990 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967011 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967023 1466459 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:13.967033 1466459 main.go:141] libmachine: (embed-certs-958254) Calling .Close
	I0131 03:25:13.967278 1466459 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:13.967298 1466459 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:13.967310 1466459 addons.go:470] Verifying addon metrics-server=true in "embed-certs-958254"
	I0131 03:25:13.970159 1466459 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:10.858108 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.357207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:13.971527 1466459 addons.go:505] enable addons completed in 2.972461213s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:13.987533 1466459 node_ready.go:49] node "embed-certs-958254" has status "Ready":"True"
	I0131 03:25:13.987564 1466459 node_ready.go:38] duration metric: took 21.175558ms waiting for node "embed-certs-958254" to be "Ready" ...
	I0131 03:25:13.987577 1466459 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:13.998968 1466459 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505741 1466459 pod_ready.go:92] pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.505764 1466459 pod_ready.go:81] duration metric: took 1.506759288s waiting for pod "coredns-5dd5756b68-bnt4w" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.505775 1466459 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511011 1466459 pod_ready.go:92] pod "etcd-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.511037 1466459 pod_ready.go:81] duration metric: took 5.255671ms waiting for pod "etcd-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.511050 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515672 1466459 pod_ready.go:92] pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.515691 1466459 pod_ready.go:81] duration metric: took 4.632936ms waiting for pod "kube-apiserver-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.515699 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520372 1466459 pod_ready.go:92] pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.520388 1466459 pod_ready.go:81] duration metric: took 4.683171ms waiting for pod "kube-controller-manager-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.520397 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570633 1466459 pod_ready.go:92] pod "kube-proxy-2n2v5" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.570660 1466459 pod_ready.go:81] duration metric: took 50.257557ms waiting for pod "kube-proxy-2n2v5" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.570671 1466459 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970302 1466459 pod_ready.go:92] pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:15.970325 1466459 pod_ready.go:81] duration metric: took 399.647846ms waiting for pod "kube-scheduler-embed-certs-958254" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:15.970336 1466459 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:17.977775 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:14.349642 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:14.349679 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:14.349688 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:14.349698 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:14.349705 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:14.349726 1465727 retry.go:31] will retry after 1.923619057s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:16.279057 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:16.279090 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:16.279098 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:16.279108 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:16.279114 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:16.279137 1465727 retry.go:31] will retry after 2.073030623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:18.359162 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:18.359189 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:18.359195 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:18.359204 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:18.359209 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:18.359228 1465727 retry.go:31] will retry after 3.260033275s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:15.855521 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:17.855614 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:20.514278 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.713623849s)
	I0131 03:25:20.514394 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:20.527663 1465496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0131 03:25:20.536562 1465496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0131 03:25:20.545294 1465496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0131 03:25:20.545336 1465496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0131 03:25:20.598639 1465496 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0131 03:25:20.598867 1465496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0131 03:25:20.744229 1465496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0131 03:25:20.744371 1465496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0131 03:25:20.744509 1465496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0131 03:25:20.966346 1465496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0131 03:25:20.968311 1465496 out.go:204]   - Generating certificates and keys ...
	I0131 03:25:20.968451 1465496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0131 03:25:20.968540 1465496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0131 03:25:20.968652 1465496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0131 03:25:20.968758 1465496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0131 03:25:20.968846 1465496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0131 03:25:20.969285 1465496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0131 03:25:20.969711 1465496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0131 03:25:20.970103 1465496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0131 03:25:20.970500 1465496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0131 03:25:20.970914 1465496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0131 03:25:20.971238 1465496 kubeadm.go:322] [certs] Using the existing "sa" key
	I0131 03:25:20.971319 1465496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0131 03:25:21.137192 1465496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0131 03:25:21.403913 1465496 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0131 03:25:21.508809 1465496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0131 03:25:21.721878 1465496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0131 03:25:22.136726 1465496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0131 03:25:22.137207 1465496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0131 03:25:22.139977 1465496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0131 03:25:19.979362 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.477779 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.624554 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:21.624586 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:21.624592 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:21.624602 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:21.624607 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:21.624626 1465727 retry.go:31] will retry after 3.519201574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:19.856226 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:21.856396 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:23.857487 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:22.141783 1465496 out.go:204]   - Booting up control plane ...
	I0131 03:25:22.141884 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0131 03:25:22.141972 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0131 03:25:22.143031 1465496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0131 03:25:22.163448 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0131 03:25:22.163586 1465496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0131 03:25:22.163682 1465496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0131 03:25:22.287643 1465496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0131 03:25:24.479871 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:26.977625 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:25.149248 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:25.149277 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:25.149282 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:25.149290 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:25.149295 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:25.149314 1465727 retry.go:31] will retry after 5.238557946s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:25.857650 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:28.356862 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.793355 1465496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.506089 seconds
	I0131 03:25:30.811559 1465496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0131 03:25:30.830148 1465496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0131 03:25:31.367774 1465496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0131 03:25:31.368036 1465496 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-625812 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0131 03:25:31.887121 1465496 kubeadm.go:322] [bootstrap-token] Using token: t3t0h9.3huj9bl3w24ti869
	I0131 03:25:31.888852 1465496 out.go:204]   - Configuring RBAC rules ...
	I0131 03:25:31.888974 1465496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0131 03:25:31.893841 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0131 03:25:31.902695 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0131 03:25:31.908132 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0131 03:25:31.912738 1465496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0131 03:25:31.918089 1465496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0131 03:25:31.936690 1465496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0131 03:25:32.182433 1465496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0131 03:25:32.325953 1465496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0131 03:25:32.325981 1465496 kubeadm.go:322] 
	I0131 03:25:32.326114 1465496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0131 03:25:32.326143 1465496 kubeadm.go:322] 
	I0131 03:25:32.326244 1465496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0131 03:25:32.326272 1465496 kubeadm.go:322] 
	I0131 03:25:32.326332 1465496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0131 03:25:32.326416 1465496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0131 03:25:32.326500 1465496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0131 03:25:32.326511 1465496 kubeadm.go:322] 
	I0131 03:25:32.326588 1465496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0131 03:25:32.326598 1465496 kubeadm.go:322] 
	I0131 03:25:32.326664 1465496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0131 03:25:32.326674 1465496 kubeadm.go:322] 
	I0131 03:25:32.326743 1465496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0131 03:25:32.326853 1465496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0131 03:25:32.326947 1465496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0131 03:25:32.326958 1465496 kubeadm.go:322] 
	I0131 03:25:32.327052 1465496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0131 03:25:32.327151 1465496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0131 03:25:32.327160 1465496 kubeadm.go:322] 
	I0131 03:25:32.327264 1465496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327405 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 \
	I0131 03:25:32.327437 1465496 kubeadm.go:322] 	--control-plane 
	I0131 03:25:32.327447 1465496 kubeadm.go:322] 
	I0131 03:25:32.327553 1465496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0131 03:25:32.327564 1465496 kubeadm.go:322] 
	I0131 03:25:32.327667 1465496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t3t0h9.3huj9bl3w24ti869 \
	I0131 03:25:32.327800 1465496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:6f556e8c51ebaf3b62262a7349a34ed7396bd0990cae13c83e8543d45ea4cbb6 
	I0131 03:25:32.328638 1465496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0131 03:25:32.328815 1465496 cni.go:84] Creating CNI manager for ""
	I0131 03:25:32.328835 1465496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 03:25:32.330439 1465496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0131 03:25:28.984930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:31.480349 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:30.393923 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:30.393959 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:30.393968 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:30.393979 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:30.393985 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:30.394010 1465727 retry.go:31] will retry after 6.045479872s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:30.357227 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.358411 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:32.332529 1465496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0131 03:25:32.442284 1465496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0131 03:25:32.487754 1465496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0131 03:25:32.487829 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.487926 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424 minikube.k8s.io/name=no-preload-625812 minikube.k8s.io/updated_at=2024_01_31T03_25_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:32.706857 1465496 ops.go:34] apiserver oom_adj: -16
	I0131 03:25:32.707010 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.207717 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.707229 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.207690 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:34.707786 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:35.207781 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:33.980255 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.481025 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:36.444898 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:36.444932 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:36.444938 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:36.444946 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:36.444951 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:36.444993 1465727 retry.go:31] will retry after 6.676077992s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:34.855915 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:37.356945 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:35.707273 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.207173 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:36.707797 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.207697 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:37.707209 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.207989 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.707538 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.207693 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:39.707737 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:40.207439 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:38.980635 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:41.479377 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:43.125885 1465727 system_pods.go:86] 4 kube-system pods found
	I0131 03:25:43.125912 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:43.125917 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:43.125924 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:43.125928 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:43.125947 1465727 retry.go:31] will retry after 7.454064585s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:39.858377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:42.356966 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:40.707639 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.207708 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:41.707131 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.207700 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:42.707292 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.207810 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:43.707392 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.207490 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.707258 1465496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0131 03:25:44.883783 1465496 kubeadm.go:1088] duration metric: took 12.396028951s to wait for elevateKubeSystemPrivileges.
	I0131 03:25:44.883823 1465496 kubeadm.go:406] StartCluster complete in 5m11.777629477s
	I0131 03:25:44.883850 1465496 settings.go:142] acquiring lock: {Name:mk1ffbaf304386935ef7f355c7975acd375adb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.883949 1465496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:25:44.886319 1465496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/kubeconfig: {Name:mk06c7a41922db80d2c00cebbdee72bfe67d0d0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 03:25:44.886620 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0131 03:25:44.886727 1465496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0131 03:25:44.886814 1465496 addons.go:69] Setting storage-provisioner=true in profile "no-preload-625812"
	I0131 03:25:44.886837 1465496 addons.go:234] Setting addon storage-provisioner=true in "no-preload-625812"
	W0131 03:25:44.886849 1465496 addons.go:243] addon storage-provisioner should already be in state true
	I0131 03:25:44.886903 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.886934 1465496 config.go:182] Loaded profile config "no-preload-625812": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0131 03:25:44.886991 1465496 addons.go:69] Setting default-storageclass=true in profile "no-preload-625812"
	I0131 03:25:44.887007 1465496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-625812"
	I0131 03:25:44.887134 1465496 addons.go:69] Setting metrics-server=true in profile "no-preload-625812"
	I0131 03:25:44.887155 1465496 addons.go:234] Setting addon metrics-server=true in "no-preload-625812"
	W0131 03:25:44.887164 1465496 addons.go:243] addon metrics-server should already be in state true
	I0131 03:25:44.887216 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.887313 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887349 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887407 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887439 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.887611 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.887655 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.908876 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0131 03:25:44.908881 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0131 03:25:44.908879 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0131 03:25:44.909406 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909433 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909512 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.909925 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.909950 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910054 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910098 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910123 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.910148 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.910434 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910530 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910543 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.910740 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.911086 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911140 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.911185 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.911230 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.914635 1465496 addons.go:234] Setting addon default-storageclass=true in "no-preload-625812"
	W0131 03:25:44.914667 1465496 addons.go:243] addon default-storageclass should already be in state true
	I0131 03:25:44.914698 1465496 host.go:66] Checking if "no-preload-625812" exists ...
	I0131 03:25:44.915089 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.915135 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.931265 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0131 03:25:44.931296 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0131 03:25:44.931816 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.931859 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.932148 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932599 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932449 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.932677 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.932938 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933062 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.933655 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.933681 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.933726 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.933947 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934129 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.934262 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.934954 1465496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 03:25:44.935001 1465496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 03:25:44.936333 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.938601 1465496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0131 03:25:44.940239 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0131 03:25:44.940256 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0131 03:25:44.940273 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.938638 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.942306 1465496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0131 03:25:44.944873 1465496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:44.944894 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0131 03:25:44.944914 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.943649 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944987 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.945023 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.944263 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.945795 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.946072 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.946309 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.949097 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949522 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.949544 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.949710 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.949892 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.950040 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.950179 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:44.959691 1465496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0131 03:25:44.960146 1465496 main.go:141] libmachine: () Calling .GetVersion
	I0131 03:25:44.960696 1465496 main.go:141] libmachine: Using API Version  1
	I0131 03:25:44.960723 1465496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 03:25:44.961045 1465496 main.go:141] libmachine: () Calling .GetMachineName
	I0131 03:25:44.961279 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetState
	I0131 03:25:44.963057 1465496 main.go:141] libmachine: (no-preload-625812) Calling .DriverName
	I0131 03:25:44.963321 1465496 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:44.963342 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0131 03:25:44.963363 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHHostname
	I0131 03:25:44.966336 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.966808 1465496 main.go:141] libmachine: (no-preload-625812) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1b:69", ip: ""} in network mk-no-preload-625812: {Iface:virbr4 ExpiryTime:2024-01-31 04:20:06 +0000 UTC Type:0 Mac:52:54:00:11:1b:69 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:no-preload-625812 Clientid:01:52:54:00:11:1b:69}
	I0131 03:25:44.966845 1465496 main.go:141] libmachine: (no-preload-625812) DBG | domain no-preload-625812 has defined IP address 192.168.72.23 and MAC address 52:54:00:11:1b:69 in network mk-no-preload-625812
	I0131 03:25:44.967006 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHPort
	I0131 03:25:44.967205 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHKeyPath
	I0131 03:25:44.967329 1465496 main.go:141] libmachine: (no-preload-625812) Calling .GetSSHUsername
	I0131 03:25:44.967472 1465496 sshutil.go:53] new ssh client: &{IP:192.168.72.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/no-preload-625812/id_rsa Username:docker}
	I0131 03:25:45.114858 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0131 03:25:45.135760 1465496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0131 03:25:45.209439 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0131 03:25:45.209466 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0131 03:25:45.219146 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0131 03:25:45.287400 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0131 03:25:45.287430 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0131 03:25:45.380888 1465496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:45.380917 1465496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0131 03:25:45.462341 1465496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-625812" context rescaled to 1 replicas
	I0131 03:25:45.462403 1465496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0131 03:25:45.463834 1465496 out.go:177] * Verifying Kubernetes components...
	I0131 03:25:45.465542 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:45.515980 1465496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0131 03:25:46.322228 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.20732453s)
	I0131 03:25:46.322281 1465496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.186472094s)
	I0131 03:25:46.322327 1465496 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0131 03:25:46.322296 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322366 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322413 1465496 node_ready.go:35] waiting up to 6m0s for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.322369 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.103177926s)
	I0131 03:25:46.322663 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322676 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.322757 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.322760 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.322773 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.322783 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.322791 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323137 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323156 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323167 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.323176 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.323177 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323257 1465496 main.go:141] libmachine: (no-preload-625812) DBG | Closing plugin on server side
	I0131 03:25:46.323281 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323295 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.323733 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.323755 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.329699 1465496 node_ready.go:49] node "no-preload-625812" has status "Ready":"True"
	I0131 03:25:46.329719 1465496 node_ready.go:38] duration metric: took 7.243031ms waiting for node "no-preload-625812" to be "Ready" ...
	I0131 03:25:46.329728 1465496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:46.345672 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.345703 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.345984 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.346000 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.348953 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:46.699387 1465496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183353653s)
	I0131 03:25:46.699456 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699474 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.699910 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.699932 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.699945 1465496 main.go:141] libmachine: Making call to close driver server
	I0131 03:25:46.699957 1465496 main.go:141] libmachine: (no-preload-625812) Calling .Close
	I0131 03:25:46.700251 1465496 main.go:141] libmachine: Successfully made call to close driver server
	I0131 03:25:46.700272 1465496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0131 03:25:46.700285 1465496 addons.go:470] Verifying addon metrics-server=true in "no-preload-625812"
	I0131 03:25:46.702053 1465496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0131 03:25:43.980700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.478141 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:44.855513 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.857198 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:49.356657 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:46.703328 1465496 addons.go:505] enable addons completed in 1.816619953s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0131 03:25:46.865293 1465496 pod_ready.go:97] error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865325 1465496 pod_ready.go:81] duration metric: took 516.342792ms waiting for pod "coredns-76f75df574-6wqbt" in "kube-system" namespace to be "Ready" ...
	E0131 03:25:46.865336 1465496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-6wqbt" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-6wqbt" not found
	I0131 03:25:46.865343 1465496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872316 1465496 pod_ready.go:92] pod "coredns-76f75df574-hvxjf" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.872345 1465496 pod_ready.go:81] duration metric: took 1.006996095s waiting for pod "coredns-76f75df574-hvxjf" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.872355 1465496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878192 1465496 pod_ready.go:92] pod "etcd-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.878215 1465496 pod_ready.go:81] duration metric: took 5.854656ms waiting for pod "etcd-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.878223 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883120 1465496 pod_ready.go:92] pod "kube-apiserver-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.883139 1465496 pod_ready.go:81] duration metric: took 4.910099ms waiting for pod "kube-apiserver-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.883147 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889909 1465496 pod_ready.go:92] pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:47.889934 1465496 pod_ready.go:81] duration metric: took 6.780796ms waiting for pod "kube-controller-manager-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:47.889944 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926206 1465496 pod_ready.go:92] pod "kube-proxy-pkvj6" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:48.926230 1465496 pod_ready.go:81] duration metric: took 1.036280111s waiting for pod "kube-proxy-pkvj6" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:48.926239 1465496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325588 1465496 pod_ready.go:92] pod "kube-scheduler-no-preload-625812" in "kube-system" namespace has status "Ready":"True"
	I0131 03:25:49.325613 1465496 pod_ready.go:81] duration metric: took 399.368272ms waiting for pod "kube-scheduler-no-preload-625812" in "kube-system" namespace to be "Ready" ...
	I0131 03:25:49.325623 1465496 pod_ready.go:38] duration metric: took 2.995885901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:25:49.325640 1465496 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:25:49.325693 1465496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:25:49.339591 1465496 api_server.go:72] duration metric: took 3.877145066s to wait for apiserver process to appear ...
	I0131 03:25:49.339624 1465496 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:25:49.339652 1465496 api_server.go:253] Checking apiserver healthz at https://192.168.72.23:8443/healthz ...
	I0131 03:25:49.345130 1465496 api_server.go:279] https://192.168.72.23:8443/healthz returned 200:
	ok
	I0131 03:25:49.346350 1465496 api_server.go:141] control plane version: v1.29.0-rc.2
	I0131 03:25:49.346371 1465496 api_server.go:131] duration metric: took 6.739501ms to wait for apiserver health ...
	I0131 03:25:49.346379 1465496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:25:49.529845 1465496 system_pods.go:59] 8 kube-system pods found
	I0131 03:25:49.529876 1465496 system_pods.go:61] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.529881 1465496 system_pods.go:61] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.529885 1465496 system_pods.go:61] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.529890 1465496 system_pods.go:61] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.529894 1465496 system_pods.go:61] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.529898 1465496 system_pods.go:61] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.529905 1465496 system_pods.go:61] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.529909 1465496 system_pods.go:61] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.529918 1465496 system_pods.go:74] duration metric: took 183.532223ms to wait for pod list to return data ...
	I0131 03:25:49.529926 1465496 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:25:49.726239 1465496 default_sa.go:45] found service account: "default"
	I0131 03:25:49.726266 1465496 default_sa.go:55] duration metric: took 196.333831ms for default service account to be created ...
	I0131 03:25:49.726276 1465496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:25:49.933151 1465496 system_pods.go:86] 8 kube-system pods found
	I0131 03:25:49.933188 1465496 system_pods.go:89] "coredns-76f75df574-hvxjf" [16747666-47f2-4cf0-85d0-0cffecb9c7a6] Running
	I0131 03:25:49.933198 1465496 system_pods.go:89] "etcd-no-preload-625812" [a9989fbf-46a1-4031-ac5e-d9af002b55f0] Running
	I0131 03:25:49.933205 1465496 system_pods.go:89] "kube-apiserver-no-preload-625812" [a7777761-fc53-4002-b131-d19b5230d3b4] Running
	I0131 03:25:49.933212 1465496 system_pods.go:89] "kube-controller-manager-no-preload-625812" [ba455760-1d59-4a47-b7eb-c571bc865092] Running
	I0131 03:25:49.933220 1465496 system_pods.go:89] "kube-proxy-pkvj6" [83805bb8-284a-4f67-b53a-c19bf5d51b40] Running
	I0131 03:25:49.933228 1465496 system_pods.go:89] "kube-scheduler-no-preload-625812" [c78c2a66-3451-440a-9234-a6473f7b401b] Running
	I0131 03:25:49.933243 1465496 system_pods.go:89] "metrics-server-57f55c9bc5-vjnfp" [7227d151-55ff-45b0-a85a-090f5d6ff6f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:49.933254 1465496 system_pods.go:89] "storage-provisioner" [5eb6c1a2-9c1e-442c-abb3-6e993cb70875] Running
	I0131 03:25:49.933268 1465496 system_pods.go:126] duration metric: took 206.984671ms to wait for k8s-apps to be running ...
	I0131 03:25:49.933282 1465496 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:25:49.933345 1465496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:25:49.949256 1465496 system_svc.go:56] duration metric: took 15.963316ms WaitForService to wait for kubelet.
	I0131 03:25:49.949290 1465496 kubeadm.go:581] duration metric: took 4.486852525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:25:49.949316 1465496 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:25:50.126992 1465496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:25:50.127032 1465496 node_conditions.go:123] node cpu capacity is 2
	I0131 03:25:50.127044 1465496 node_conditions.go:105] duration metric: took 177.723252ms to run NodePressure ...
	I0131 03:25:50.127056 1465496 start.go:228] waiting for startup goroutines ...
	I0131 03:25:50.127063 1465496 start.go:233] waiting for cluster config update ...
	I0131 03:25:50.127072 1465496 start.go:242] writing updated cluster config ...
	I0131 03:25:50.127343 1465496 ssh_runner.go:195] Run: rm -f paused
	I0131 03:25:50.184224 1465496 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0131 03:25:50.186267 1465496 out.go:177] * Done! kubectl is now configured to use "no-preload-625812" cluster and "default" namespace by default
	I0131 03:25:48.481166 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.977129 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:52.977622 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:50.586089 1465727 system_pods.go:86] 6 kube-system pods found
	I0131 03:25:50.586129 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:25:50.586138 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Pending
	I0131 03:25:50.586144 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:25:50.586151 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Pending
	I0131 03:25:50.586172 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:25:50.586182 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:25:50.586211 1465727 retry.go:31] will retry after 13.55623924s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0131 03:25:51.856116 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:53.856661 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:55.480823 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:57.978681 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:56.355895 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:58.356767 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:25:59.981147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.479364 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:00.856081 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:02.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.977218 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:06.978863 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:04.148474 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:04.148505 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:04.148511 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Pending
	I0131 03:26:04.148516 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:04.148520 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:04.148524 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:04.148528 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:04.148533 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:04.148537 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:04.148555 1465727 retry.go:31] will retry after 14.271857783s: missing components: etcd
	I0131 03:26:05.355042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:07.358366 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:08.981159 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:10.982761 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:09.856454 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:12.357096 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:13.478470 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:15.977827 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.426593 1465727 system_pods.go:86] 8 kube-system pods found
	I0131 03:26:18.426625 1465727 system_pods.go:89] "coredns-5644d7b6d9-qq7jp" [cbb4201f-8bce-408f-a16d-57d8f91c8304] Running
	I0131 03:26:18.426634 1465727 system_pods.go:89] "etcd-old-k8s-version-711547" [1c110d2d-6bae-413b-ba85-f7f4728bbf6d] Running
	I0131 03:26:18.426641 1465727 system_pods.go:89] "kube-apiserver-old-k8s-version-711547" [92774882-a32e-4277-8c20-b24a56f16663] Running
	I0131 03:26:18.426647 1465727 system_pods.go:89] "kube-controller-manager-old-k8s-version-711547" [e13fa950-95f7-4553-a425-b1641e2053ed] Running
	I0131 03:26:18.426652 1465727 system_pods.go:89] "kube-proxy-wzft2" [31a2844e-22c6-4184-9f2b-5030a29dc0ec] Running
	I0131 03:26:18.426657 1465727 system_pods.go:89] "kube-scheduler-old-k8s-version-711547" [222a6f32-f21c-4836-8d88-f9057d762e54] Running
	I0131 03:26:18.426667 1465727 system_pods.go:89] "metrics-server-74d5856cc6-sgw75" [e66d5152-4065-4916-8bfa-1b78adc5c7a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:26:18.426676 1465727 system_pods.go:89] "storage-provisioner" [b345c5ea-80fe-48c2-9a7a-f10b0cd4d482] Running
	I0131 03:26:18.426690 1465727 system_pods.go:126] duration metric: took 1m9.974130417s to wait for k8s-apps to be running ...
	I0131 03:26:18.426704 1465727 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:26:18.426762 1465727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:26:18.443853 1465727 system_svc.go:56] duration metric: took 17.14056ms WaitForService to wait for kubelet.
	I0131 03:26:18.443902 1465727 kubeadm.go:581] duration metric: took 1m16.810021481s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:26:18.443930 1465727 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:26:18.447269 1465727 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:26:18.447298 1465727 node_conditions.go:123] node cpu capacity is 2
	I0131 03:26:18.447311 1465727 node_conditions.go:105] duration metric: took 3.375419ms to run NodePressure ...
	I0131 03:26:18.447325 1465727 start.go:228] waiting for startup goroutines ...
	I0131 03:26:18.447333 1465727 start.go:233] waiting for cluster config update ...
	I0131 03:26:18.447348 1465727 start.go:242] writing updated cluster config ...
	I0131 03:26:18.447643 1465727 ssh_runner.go:195] Run: rm -f paused
	I0131 03:26:18.500327 1465727 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0131 03:26:18.502092 1465727 out.go:177] 
	W0131 03:26:18.503693 1465727 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0131 03:26:18.505132 1465727 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0131 03:26:18.506889 1465727 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-711547" cluster and "default" namespace by default
	I0131 03:26:14.856448 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:17.357112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:18.478401 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:20.977208 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.978473 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:19.857118 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:22.358299 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:25.478227 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:27.978500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:24.855341 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:26.855774 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:28.856168 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:30.477275 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:32.478896 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:31.357512 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:33.363164 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:34.978058 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:37.481411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:35.856084 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:38.358589 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:39.976914 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:41.979388 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:40.856122 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:42.856950 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:44.477345 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:46.478466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:45.356312 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:47.855178 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:48.978543 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.477641 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:49.856079 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:51.856377 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:54.358161 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:53.477989 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:55.977887 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:56.855581 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.856493 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:26:58.477589 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:00.478116 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:02.978262 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:01.354961 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:03.355994 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.478139 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.977913 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:05.356248 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:07.855596 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:10.479147 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:12.977533 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:09.856222 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:11.857068 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.356693 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:14.978967 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:17.477119 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:16.854825 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:18.855620 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:19.477877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:21.482081 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:20.856333 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.355603 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:23.978877 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:26.477700 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:25.356085 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:27.356888 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:28.478497 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:30.977469 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:32.977663 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:29.854905 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:31.855752 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:33.855976 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.480505 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.977880 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:35.857042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:37.862112 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:39.977961 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.478948 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:40.355787 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:42.358217 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.977950 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.478570 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:44.855551 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:47.355853 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.977939 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:51.978267 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:49.855671 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:52.357889 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:53.979331 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:56.477411 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:54.856642 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:57.357372 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:58.478175 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:00.977929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.978272 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:27:59.856232 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:02.356390 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:05.477602 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:07.478168 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:04.855423 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:06.859565 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.355517 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:09.977639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.977754 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:11.855199 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:13.856260 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:14.477406 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:16.478372 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:15.856582 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:17.861124 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:18.980067 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:21.478833 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:20.356883 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:22.358007 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:23.979040 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.478463 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:24.855207 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:26.855709 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.866306 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:28.978973 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.477340 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:31.355706 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.855699 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:33.477521 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:35.478390 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:37.977270 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:36.358244 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:38.855704 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:39.979930 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.477381 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:40.856442 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:42.857041 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:44.477500 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:46.478446 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:45.356039 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:47.855042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:48.977241 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:50.977925 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:52.978323 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:49.857897 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:51.857941 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:54.357042 1465898 pod_ready.go:102] pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.477690 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:57.477927 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:55.855298 1465898 pod_ready.go:81] duration metric: took 4m0.007008152s waiting for pod "metrics-server-57f55c9bc5-k4ht8" in "kube-system" namespace to be "Ready" ...
	E0131 03:28:55.855323 1465898 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:28:55.855330 1465898 pod_ready.go:38] duration metric: took 4m2.377385486s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:28:55.855346 1465898 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:28:55.855399 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:55.855533 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:55.913399 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:55.913425 1465898 cri.go:89] found id: ""
	I0131 03:28:55.913445 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:55.913515 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.918308 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:55.918379 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:55.964846 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:55.964872 1465898 cri.go:89] found id: ""
	I0131 03:28:55.964881 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:55.964942 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:55.969090 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:55.969158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:56.012247 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:56.012271 1465898 cri.go:89] found id: ""
	I0131 03:28:56.012279 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:56.012337 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.016457 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:56.016535 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:56.053842 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.053867 1465898 cri.go:89] found id: ""
	I0131 03:28:56.053877 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:56.053926 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.057807 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:56.057889 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:28:56.097431 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.097465 1465898 cri.go:89] found id: ""
	I0131 03:28:56.097477 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:28:56.097549 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.101354 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:28:56.101420 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:28:56.136696 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.136725 1465898 cri.go:89] found id: ""
	I0131 03:28:56.136735 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:28:56.136800 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.140584 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:28:56.140661 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:28:56.177606 1465898 cri.go:89] found id: ""
	I0131 03:28:56.177639 1465898 logs.go:284] 0 containers: []
	W0131 03:28:56.177650 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:28:56.177658 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:28:56.177779 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:28:56.215795 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.215824 1465898 cri.go:89] found id: ""
	I0131 03:28:56.215835 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:28:56.215909 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:56.220297 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:28:56.220324 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:28:56.319500 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:28:56.319544 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:28:56.355731 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:28:56.355767 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:28:56.410301 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:28:56.410341 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:28:56.858474 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:28:56.858531 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:28:56.903299 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:28:56.903337 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:56.961020 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:28:56.961070 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:28:56.998347 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:28:56.998382 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:28:57.011562 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:28:57.011594 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:28:57.152899 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:28:57.152937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:57.201041 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:28:57.201084 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:57.247253 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:28:57.247289 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.478758 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:01.977644 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:28:59.786669 1465898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:28:59.804046 1465898 api_server.go:72] duration metric: took 4m8.808083047s to wait for apiserver process to appear ...
	I0131 03:28:59.804079 1465898 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:28:59.804131 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:28:59.804249 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:28:59.846418 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:28:59.846440 1465898 cri.go:89] found id: ""
	I0131 03:28:59.846448 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:28:59.846516 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.850526 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:28:59.850588 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:28:59.892343 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:28:59.892373 1465898 cri.go:89] found id: ""
	I0131 03:28:59.892382 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:28:59.892449 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.896483 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:28:59.896561 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:28:59.933901 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:28:59.933934 1465898 cri.go:89] found id: ""
	I0131 03:28:59.933945 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:28:59.934012 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.938150 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:28:59.938232 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:28:59.980328 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:28:59.980354 1465898 cri.go:89] found id: ""
	I0131 03:28:59.980363 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:28:59.980418 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:28:59.984866 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:28:59.984943 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:00.029663 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.029695 1465898 cri.go:89] found id: ""
	I0131 03:29:00.029705 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:00.029753 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.034759 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:00.034827 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:00.084320 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.084347 1465898 cri.go:89] found id: ""
	I0131 03:29:00.084355 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:00.084431 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.088744 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:00.088819 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:00.133028 1465898 cri.go:89] found id: ""
	I0131 03:29:00.133062 1465898 logs.go:284] 0 containers: []
	W0131 03:29:00.133072 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:00.133080 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:00.133145 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:00.175187 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.175219 1465898 cri.go:89] found id: ""
	I0131 03:29:00.175229 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:00.175306 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:00.179387 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:00.179420 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:00.233630 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:00.233676 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:00.271692 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:00.271735 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:00.655131 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:00.655177 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:00.757571 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:00.757628 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:00.805958 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:00.806000 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:00.842604 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:00.842650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:00.888064 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:00.888103 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:00.939276 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:00.939331 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:00.981965 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:00.982006 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:00.996237 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:00.996265 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:01.129715 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:01.129754 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.677131 1465898 api_server.go:253] Checking apiserver healthz at https://192.168.61.123:8444/healthz ...
	I0131 03:29:03.684945 1465898 api_server.go:279] https://192.168.61.123:8444/healthz returned 200:
	ok
	I0131 03:29:03.687117 1465898 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:03.687142 1465898 api_server.go:131] duration metric: took 3.883056117s to wait for apiserver health ...
	I0131 03:29:03.687171 1465898 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:03.687245 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:03.687303 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:03.727289 1465898 cri.go:89] found id: "3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:03.727314 1465898 cri.go:89] found id: ""
	I0131 03:29:03.727322 1465898 logs.go:284] 1 containers: [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd]
	I0131 03:29:03.727375 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.731095 1465898 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:03.731158 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:03.779103 1465898 cri.go:89] found id: "bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:03.779134 1465898 cri.go:89] found id: ""
	I0131 03:29:03.779144 1465898 logs.go:284] 1 containers: [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9]
	I0131 03:29:03.779223 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.783387 1465898 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:03.783459 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:03.821342 1465898 cri.go:89] found id: "8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:03.821368 1465898 cri.go:89] found id: ""
	I0131 03:29:03.821376 1465898 logs.go:284] 1 containers: [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9]
	I0131 03:29:03.821438 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.825907 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:03.825990 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:03.863826 1465898 cri.go:89] found id: "bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:03.863853 1465898 cri.go:89] found id: ""
	I0131 03:29:03.863867 1465898 logs.go:284] 1 containers: [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e]
	I0131 03:29:03.863919 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.868093 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:03.868163 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:03.908653 1465898 cri.go:89] found id: "fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:03.908681 1465898 cri.go:89] found id: ""
	I0131 03:29:03.908690 1465898 logs.go:284] 1 containers: [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5]
	I0131 03:29:03.908750 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.912998 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:03.913078 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:03.961104 1465898 cri.go:89] found id: "a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:03.961131 1465898 cri.go:89] found id: ""
	I0131 03:29:03.961139 1465898 logs.go:284] 1 containers: [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4]
	I0131 03:29:03.961212 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:03.965913 1465898 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:03.965996 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:04.003791 1465898 cri.go:89] found id: ""
	I0131 03:29:04.003824 1465898 logs.go:284] 0 containers: []
	W0131 03:29:04.003833 1465898 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:04.003840 1465898 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:04.003907 1465898 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:04.040736 1465898 cri.go:89] found id: "7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.040773 1465898 cri.go:89] found id: ""
	I0131 03:29:04.040785 1465898 logs.go:284] 1 containers: [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4]
	I0131 03:29:04.040852 1465898 ssh_runner.go:195] Run: which crictl
	I0131 03:29:04.045013 1465898 logs.go:123] Gathering logs for container status ...
	I0131 03:29:04.045042 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:04.091615 1465898 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:04.091650 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:04.204602 1465898 logs.go:123] Gathering logs for kube-scheduler [bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e] ...
	I0131 03:29:04.204638 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb28486f5d752454f2144e5d209578d6fd3ab653049e439fd63903ca2f51fb5e"
	I0131 03:29:04.257510 1465898 logs.go:123] Gathering logs for kube-proxy [fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5] ...
	I0131 03:29:04.257548 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc0700086e958d4b589f92f219fe724dfbd3e388da086e537e2f0c8e7fb091b5"
	I0131 03:29:04.296585 1465898 logs.go:123] Gathering logs for kube-controller-manager [a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4] ...
	I0131 03:29:04.296619 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80c35ecce811be5560a45c6a01c0bafcc0a0abf5486bb33fccfb96154efc1c4"
	I0131 03:29:04.360438 1465898 logs.go:123] Gathering logs for storage-provisioner [7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4] ...
	I0131 03:29:04.360480 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cd76e5e503bf459dceeab5e52b16f73825c99afdef7de560d9f35befc482bf4"
	I0131 03:29:04.398825 1465898 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:04.398858 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:04.711357 1465898 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:04.711403 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:04.804895 1465898 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:04.804940 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:04.819394 1465898 logs.go:123] Gathering logs for kube-apiserver [3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd] ...
	I0131 03:29:04.819426 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3feac299b4d0ab89b62624a7d67ba3606b88ad4c9be90c43514467e9c9c9e4cd"
	I0131 03:29:04.869897 1465898 logs.go:123] Gathering logs for etcd [bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9] ...
	I0131 03:29:04.869937 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc73770fd85b8ce4cf630e62a8687a4a940bd8b493796e859710f860194118e9"
	I0131 03:29:04.918002 1465898 logs.go:123] Gathering logs for coredns [8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9] ...
	I0131 03:29:04.918040 1465898 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dc2215c9bd1d4b347df08d729d4dfda720533aef79dbb92759407c280a3ffe9"
	I0131 03:29:07.471428 1465898 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:07.471466 1465898 system_pods.go:61] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.471474 1465898 system_pods.go:61] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.471481 1465898 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.471488 1465898 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.471495 1465898 system_pods.go:61] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.471501 1465898 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.471516 1465898 system_pods.go:61] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.471524 1465898 system_pods.go:61] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.471535 1465898 system_pods.go:74] duration metric: took 3.784356035s to wait for pod list to return data ...
	I0131 03:29:07.471552 1465898 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:07.474519 1465898 default_sa.go:45] found service account: "default"
	I0131 03:29:07.474547 1465898 default_sa.go:55] duration metric: took 2.986529ms for default service account to be created ...
	I0131 03:29:07.474559 1465898 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:07.480778 1465898 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:07.480805 1465898 system_pods.go:89] "coredns-5dd5756b68-5gdks" [a35e6baf-1ad9-4df7-bbb1-a2443f8c658f] Running
	I0131 03:29:07.480810 1465898 system_pods.go:89] "etcd-default-k8s-diff-port-873005" [fb2e36a7-d3dc-4943-9fa9-ade175f84c77] Running
	I0131 03:29:07.480816 1465898 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-873005" [9ad16713-c913-48a7-9045-d804bc6437da] Running
	I0131 03:29:07.480823 1465898 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-873005" [efbfcad6-fbe9-4d18-913e-c322e7481e10] Running
	I0131 03:29:07.480827 1465898 system_pods.go:89] "kube-proxy-blwwq" [190c406e-eb21-4420-bcec-ad218ec4b760] Running
	I0131 03:29:07.480831 1465898 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-873005" [5b6b196e-8538-47bb-abe9-b88588fdd2d9] Running
	I0131 03:29:07.480837 1465898 system_pods.go:89] "metrics-server-57f55c9bc5-k4ht8" [604feb17-6aaf-40e8-a6e6-01c899530151] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:07.480842 1465898 system_pods.go:89] "storage-provisioner" [db68da18-b403-43a6-abdd-f3354e633a5c] Running
	I0131 03:29:07.480850 1465898 system_pods.go:126] duration metric: took 6.285456ms to wait for k8s-apps to be running ...
	I0131 03:29:07.480856 1465898 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:07.480905 1465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:07.497612 1465898 system_svc.go:56] duration metric: took 16.74594ms WaitForService to wait for kubelet.
	I0131 03:29:07.497643 1465898 kubeadm.go:581] duration metric: took 4m16.501686281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:07.497678 1465898 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:07.501680 1465898 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:07.501732 1465898 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:07.501748 1465898 node_conditions.go:105] duration metric: took 4.063716ms to run NodePressure ...
	I0131 03:29:07.501763 1465898 start.go:228] waiting for startup goroutines ...
	I0131 03:29:07.501772 1465898 start.go:233] waiting for cluster config update ...
	I0131 03:29:07.501818 1465898 start.go:242] writing updated cluster config ...
	I0131 03:29:07.502234 1465898 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:07.559193 1465898 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:07.561350 1465898 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-873005" cluster and "default" namespace by default
	I0131 03:29:03.978465 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:06.477545 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:08.480466 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:10.978639 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:13.478152 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978929 1466459 pod_ready.go:102] pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace has status "Ready":"False"
	I0131 03:29:15.978967 1466459 pod_ready.go:81] duration metric: took 4m0.008624682s waiting for pod "metrics-server-57f55c9bc5-dj7l2" in "kube-system" namespace to be "Ready" ...
	E0131 03:29:15.978976 1466459 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0131 03:29:15.978984 1466459 pod_ready.go:38] duration metric: took 4m1.99139457s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0131 03:29:15.978999 1466459 api_server.go:52] waiting for apiserver process to appear ...
	I0131 03:29:15.979026 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:15.979074 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:16.041735 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:16.041774 1466459 cri.go:89] found id: ""
	I0131 03:29:16.041784 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:16.041845 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.046910 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:16.046982 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:16.085124 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.085156 1466459 cri.go:89] found id: ""
	I0131 03:29:16.085166 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:16.085226 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.089189 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:16.089274 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:16.129255 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.129286 1466459 cri.go:89] found id: ""
	I0131 03:29:16.129296 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:16.129352 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.133364 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:16.133451 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:16.170605 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.170634 1466459 cri.go:89] found id: ""
	I0131 03:29:16.170643 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:16.170704 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.175117 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:16.175197 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:16.210139 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:16.210169 1466459 cri.go:89] found id: ""
	I0131 03:29:16.210179 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:16.210248 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.214877 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:16.214960 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:16.257772 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.257797 1466459 cri.go:89] found id: ""
	I0131 03:29:16.257807 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:16.257878 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.262276 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:16.262341 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:16.304203 1466459 cri.go:89] found id: ""
	I0131 03:29:16.304233 1466459 logs.go:284] 0 containers: []
	W0131 03:29:16.304241 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:16.304248 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:16.304325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:16.343337 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:16.343360 1466459 cri.go:89] found id: ""
	I0131 03:29:16.343368 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:16.343423 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:16.347098 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:16.347129 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:16.389501 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:16.389544 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:16.426153 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:16.426196 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:16.476241 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:16.476281 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:16.533086 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:16.533131 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:16.575664 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:16.575701 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:16.675622 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:16.675669 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:16.690251 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:16.690285 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:16.828714 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:16.828748 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:17.253277 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:17.253335 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:17.304285 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:17.304323 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:17.340432 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:17.340465 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:19.889056 1466459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 03:29:19.904225 1466459 api_server.go:72] duration metric: took 4m8.286630357s to wait for apiserver process to appear ...
	I0131 03:29:19.904258 1466459 api_server.go:88] waiting for apiserver healthz status ...
	I0131 03:29:19.904302 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:19.904375 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:19.939116 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:19.939147 1466459 cri.go:89] found id: ""
	I0131 03:29:19.939159 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:19.939225 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.943273 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:19.943351 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:19.979411 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:19.979436 1466459 cri.go:89] found id: ""
	I0131 03:29:19.979445 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:19.979512 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:19.984054 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:19.984148 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:20.022949 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.022978 1466459 cri.go:89] found id: ""
	I0131 03:29:20.022988 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:20.023046 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.027252 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:20.027325 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:20.064215 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.064238 1466459 cri.go:89] found id: ""
	I0131 03:29:20.064246 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:20.064303 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.068589 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:20.068687 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:20.106750 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.106781 1466459 cri.go:89] found id: ""
	I0131 03:29:20.106792 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:20.106854 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.111267 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:20.111342 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:20.147750 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.147789 1466459 cri.go:89] found id: ""
	I0131 03:29:20.147801 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:20.147873 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.152882 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:20.152950 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:20.191082 1466459 cri.go:89] found id: ""
	I0131 03:29:20.191121 1466459 logs.go:284] 0 containers: []
	W0131 03:29:20.191133 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:20.191143 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:20.191226 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:20.226346 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.226373 1466459 cri.go:89] found id: ""
	I0131 03:29:20.226382 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:20.226436 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:20.230561 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:20.230607 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:20.596919 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:20.596968 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:20.691142 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:20.691184 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:20.750659 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:20.750692 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:20.816839 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:20.816882 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:20.852691 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:20.852730 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:20.909788 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:20.909828 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:20.950311 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:20.950360 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:20.985515 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:20.985554 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:21.030306 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:21.030350 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:21.043130 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:21.043172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:21.160716 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:21.160763 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.706550 1466459 api_server.go:253] Checking apiserver healthz at https://192.168.39.232:8443/healthz ...
	I0131 03:29:23.711528 1466459 api_server.go:279] https://192.168.39.232:8443/healthz returned 200:
	ok
	I0131 03:29:23.713998 1466459 api_server.go:141] control plane version: v1.28.4
	I0131 03:29:23.714027 1466459 api_server.go:131] duration metric: took 3.809760557s to wait for apiserver health ...
	I0131 03:29:23.714039 1466459 system_pods.go:43] waiting for kube-system pods to appear ...
	I0131 03:29:23.714070 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0131 03:29:23.714142 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0131 03:29:23.754990 1466459 cri.go:89] found id: "60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:23.755017 1466459 cri.go:89] found id: ""
	I0131 03:29:23.755028 1466459 logs.go:284] 1 containers: [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b]
	I0131 03:29:23.755091 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.759151 1466459 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0131 03:29:23.759224 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0131 03:29:23.798410 1466459 cri.go:89] found id: "dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:23.798448 1466459 cri.go:89] found id: ""
	I0131 03:29:23.798459 1466459 logs.go:284] 1 containers: [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666]
	I0131 03:29:23.798541 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.802512 1466459 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0131 03:29:23.802588 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0131 03:29:23.840962 1466459 cri.go:89] found id: "6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:23.840991 1466459 cri.go:89] found id: ""
	I0131 03:29:23.841001 1466459 logs.go:284] 1 containers: [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71]
	I0131 03:29:23.841073 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.844943 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0131 03:29:23.845021 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0131 03:29:23.882314 1466459 cri.go:89] found id: "053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:23.882355 1466459 cri.go:89] found id: ""
	I0131 03:29:23.882368 1466459 logs.go:284] 1 containers: [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c]
	I0131 03:29:23.882438 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.886227 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0131 03:29:23.886292 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0131 03:29:23.925001 1466459 cri.go:89] found id: "282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:23.925031 1466459 cri.go:89] found id: ""
	I0131 03:29:23.925042 1466459 logs.go:284] 1 containers: [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a]
	I0131 03:29:23.925100 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.929531 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0131 03:29:23.929601 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0131 03:29:23.969068 1466459 cri.go:89] found id: "4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:23.969098 1466459 cri.go:89] found id: ""
	I0131 03:29:23.969108 1466459 logs.go:284] 1 containers: [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b]
	I0131 03:29:23.969167 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:23.973154 1466459 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0131 03:29:23.973216 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0131 03:29:24.010928 1466459 cri.go:89] found id: ""
	I0131 03:29:24.010956 1466459 logs.go:284] 0 containers: []
	W0131 03:29:24.010963 1466459 logs.go:286] No container was found matching "kindnet"
	I0131 03:29:24.010970 1466459 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0131 03:29:24.011026 1466459 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0131 03:29:24.052588 1466459 cri.go:89] found id: "31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.052614 1466459 cri.go:89] found id: ""
	I0131 03:29:24.052622 1466459 logs.go:284] 1 containers: [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9]
	I0131 03:29:24.052678 1466459 ssh_runner.go:195] Run: which crictl
	I0131 03:29:24.056735 1466459 logs.go:123] Gathering logs for kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] ...
	I0131 03:29:24.056762 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c"
	I0131 03:29:24.105290 1466459 logs.go:123] Gathering logs for kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] ...
	I0131 03:29:24.105324 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b"
	I0131 03:29:24.152634 1466459 logs.go:123] Gathering logs for etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] ...
	I0131 03:29:24.152678 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666"
	I0131 03:29:24.198981 1466459 logs.go:123] Gathering logs for coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] ...
	I0131 03:29:24.199021 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71"
	I0131 03:29:24.247140 1466459 logs.go:123] Gathering logs for kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] ...
	I0131 03:29:24.247172 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a"
	I0131 03:29:24.287472 1466459 logs.go:123] Gathering logs for kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] ...
	I0131 03:29:24.287502 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b"
	I0131 03:29:24.344060 1466459 logs.go:123] Gathering logs for storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] ...
	I0131 03:29:24.344101 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9"
	I0131 03:29:24.384811 1466459 logs.go:123] Gathering logs for CRI-O ...
	I0131 03:29:24.384846 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0131 03:29:24.707577 1466459 logs.go:123] Gathering logs for container status ...
	I0131 03:29:24.707628 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0131 03:29:24.756450 1466459 logs.go:123] Gathering logs for kubelet ...
	I0131 03:29:24.756490 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0131 03:29:24.844886 1466459 logs.go:123] Gathering logs for dmesg ...
	I0131 03:29:24.844935 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0131 03:29:24.859102 1466459 logs.go:123] Gathering logs for describe nodes ...
	I0131 03:29:24.859132 1466459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0131 03:29:27.482952 1466459 system_pods.go:59] 8 kube-system pods found
	I0131 03:29:27.482992 1466459 system_pods.go:61] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.483000 1466459 system_pods.go:61] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.483007 1466459 system_pods.go:61] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.483027 1466459 system_pods.go:61] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.483038 1466459 system_pods.go:61] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.483049 1466459 system_pods.go:61] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.483056 1466459 system_pods.go:61] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.483066 1466459 system_pods.go:61] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.483076 1466459 system_pods.go:74] duration metric: took 3.76903179s to wait for pod list to return data ...
	I0131 03:29:27.483087 1466459 default_sa.go:34] waiting for default service account to be created ...
	I0131 03:29:27.486092 1466459 default_sa.go:45] found service account: "default"
	I0131 03:29:27.486121 1466459 default_sa.go:55] duration metric: took 3.025473ms for default service account to be created ...
	I0131 03:29:27.486131 1466459 system_pods.go:116] waiting for k8s-apps to be running ...
	I0131 03:29:27.491964 1466459 system_pods.go:86] 8 kube-system pods found
	I0131 03:29:27.491989 1466459 system_pods.go:89] "coredns-5dd5756b68-bnt4w" [f4c92e2c-38c9-4c69-9ad3-a080b528f55b] Running
	I0131 03:29:27.491997 1466459 system_pods.go:89] "etcd-embed-certs-958254" [6ad404bb-5f8b-44b5-88e7-ad936bf8a8ed] Running
	I0131 03:29:27.492004 1466459 system_pods.go:89] "kube-apiserver-embed-certs-958254" [d18de9cf-b862-4c5c-bf50-da40518ceaa8] Running
	I0131 03:29:27.492010 1466459 system_pods.go:89] "kube-controller-manager-embed-certs-958254" [dd39f3de-6c41-4966-a9d7-458ef79853ab] Running
	I0131 03:29:27.492015 1466459 system_pods.go:89] "kube-proxy-2n2v5" [de4679d4-8107-4a80-ba07-ce446e1e5d60] Running
	I0131 03:29:27.492022 1466459 system_pods.go:89] "kube-scheduler-embed-certs-958254" [84255816-0f42-4287-a3b2-fb23ff086c5c] Running
	I0131 03:29:27.492032 1466459 system_pods.go:89] "metrics-server-57f55c9bc5-dj7l2" [9a313a14-a142-46ad-8b24-f8ab75f92fa5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0131 03:29:27.492044 1466459 system_pods.go:89] "storage-provisioner" [019a6865-9ffb-4987-91d6-b679aaea9176] Running
	I0131 03:29:27.492059 1466459 system_pods.go:126] duration metric: took 5.920402ms to wait for k8s-apps to be running ...
	I0131 03:29:27.492076 1466459 system_svc.go:44] waiting for kubelet service to be running ....
	I0131 03:29:27.492131 1466459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 03:29:27.507857 1466459 system_svc.go:56] duration metric: took 15.770556ms WaitForService to wait for kubelet.
	I0131 03:29:27.507891 1466459 kubeadm.go:581] duration metric: took 4m15.890307101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0131 03:29:27.507918 1466459 node_conditions.go:102] verifying NodePressure condition ...
	I0131 03:29:27.510942 1466459 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0131 03:29:27.510968 1466459 node_conditions.go:123] node cpu capacity is 2
	I0131 03:29:27.510980 1466459 node_conditions.go:105] duration metric: took 3.056564ms to run NodePressure ...
	I0131 03:29:27.510992 1466459 start.go:228] waiting for startup goroutines ...
	I0131 03:29:27.510998 1466459 start.go:233] waiting for cluster config update ...
	I0131 03:29:27.511008 1466459 start.go:242] writing updated cluster config ...
	I0131 03:29:27.511334 1466459 ssh_runner.go:195] Run: rm -f paused
	I0131 03:29:27.564506 1466459 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0131 03:29:27.566730 1466459 out.go:177] * Done! kubectl is now configured to use "embed-certs-958254" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Wed 2024-01-31 03:19:45 UTC, ends at Wed 2024-01-31 03:43:18 UTC. --
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.317353462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672598317336333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d75f0276-c14d-4ab2-ad16-fb374f70474c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.318010751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4db4836d-6071-427e-b250-169ffe5eb0cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.318079207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4db4836d-6071-427e-b250-169ffe5eb0cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.318992731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9,PodSandboxId:d872a54f28ec3d515a97a89239fe9d18a4439ed5d23c67371a90db8c0263fab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671514865294927,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019a6865-9ffb-4987-91d6-b679aaea9176,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7b76e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a,PodSandboxId:642e1c2de3a0230712aee73edc13887afd1ee2edb3fca11f51c6c93a281a5786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671514038911891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2n2v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4679d4-8107-4a80-ba07-ce446e1e5d60,},Annotations:map[string]string{io.kubernetes.container.hash: 94e72910,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71,PodSandboxId:274e38d2caab4bff7c37be00a9a0e55f02a1ea8b62ee915b32c17da407fc5bad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671513628896127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bnt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c92e2c-38c9-4c69-9ad3-a080b528f55b,},Annotations:map[string]string{io.kubernetes.container.hash: 3153068c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666,PodSandboxId:d68b0b3c616fdce874d6290e61da677e9ad64ad3524a251a2566382e5bc1d4ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671491471571511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c89d8de35203c4937d336ffd049f0c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 72c2f5cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c,PodSandboxId:9d8571f608d3a2d5eaf5ffff214cf7052f7ae0c14574eefbd7a4524956d09655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671491288975878,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215fcbbaabfad6adf8979dd73cdbd119,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b,PodSandboxId:c52036d797def3b2169eacb407337b3c26e02a2050835fbf9dbf68077e0eff65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671490863144055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a0a87e8db68805776126
f88fed9f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b,PodSandboxId:e48e454bfa0e345484da5933a2f4a08f609c726eab8f86cd8cfc28db75d7d5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671490677347330,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d77138dddca85a7e1089e836159cf396
,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4db4836d-6071-427e-b250-169ffe5eb0cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.361876752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8be2bdb1-a007-46b6-9a82-50a901753bec name=/runtime.v1.RuntimeService/Version
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.361955926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8be2bdb1-a007-46b6-9a82-50a901753bec name=/runtime.v1.RuntimeService/Version
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.363338600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=332b5e20-7081-418e-a4df-bbd64709b9f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.363732918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672598363714178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=332b5e20-7081-418e-a4df-bbd64709b9f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.364662841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4352ca8e-1f88-45e6-8ff5-7dc7c7944d52 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.364731246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4352ca8e-1f88-45e6-8ff5-7dc7c7944d52 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.364985893Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9,PodSandboxId:d872a54f28ec3d515a97a89239fe9d18a4439ed5d23c67371a90db8c0263fab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671514865294927,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019a6865-9ffb-4987-91d6-b679aaea9176,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7b76e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a,PodSandboxId:642e1c2de3a0230712aee73edc13887afd1ee2edb3fca11f51c6c93a281a5786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671514038911891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2n2v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4679d4-8107-4a80-ba07-ce446e1e5d60,},Annotations:map[string]string{io.kubernetes.container.hash: 94e72910,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71,PodSandboxId:274e38d2caab4bff7c37be00a9a0e55f02a1ea8b62ee915b32c17da407fc5bad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671513628896127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bnt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c92e2c-38c9-4c69-9ad3-a080b528f55b,},Annotations:map[string]string{io.kubernetes.container.hash: 3153068c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666,PodSandboxId:d68b0b3c616fdce874d6290e61da677e9ad64ad3524a251a2566382e5bc1d4ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671491471571511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c89d8de35203c4937d336ffd049f0c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 72c2f5cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c,PodSandboxId:9d8571f608d3a2d5eaf5ffff214cf7052f7ae0c14574eefbd7a4524956d09655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671491288975878,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215fcbbaabfad6adf8979dd73cdbd119,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b,PodSandboxId:c52036d797def3b2169eacb407337b3c26e02a2050835fbf9dbf68077e0eff65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671490863144055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a0a87e8db68805776126
f88fed9f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b,PodSandboxId:e48e454bfa0e345484da5933a2f4a08f609c726eab8f86cd8cfc28db75d7d5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671490677347330,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d77138dddca85a7e1089e836159cf396
,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4352ca8e-1f88-45e6-8ff5-7dc7c7944d52 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.404977960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=543f0df5-3981-45f8-91c3-2052dcb4acd0 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.405036108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=543f0df5-3981-45f8-91c3-2052dcb4acd0 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.405937775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e5b79778-3e6b-4f45-aefb-17bcb4f73d1f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.406976183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672598406923503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e5b79778-3e6b-4f45-aefb-17bcb4f73d1f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.407560660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5cc3cd79-59ed-49b4-b437-5ecf1eb70128 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.407607024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5cc3cd79-59ed-49b4-b437-5ecf1eb70128 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.407784525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9,PodSandboxId:d872a54f28ec3d515a97a89239fe9d18a4439ed5d23c67371a90db8c0263fab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671514865294927,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019a6865-9ffb-4987-91d6-b679aaea9176,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7b76e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a,PodSandboxId:642e1c2de3a0230712aee73edc13887afd1ee2edb3fca11f51c6c93a281a5786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671514038911891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2n2v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4679d4-8107-4a80-ba07-ce446e1e5d60,},Annotations:map[string]string{io.kubernetes.container.hash: 94e72910,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71,PodSandboxId:274e38d2caab4bff7c37be00a9a0e55f02a1ea8b62ee915b32c17da407fc5bad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671513628896127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bnt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c92e2c-38c9-4c69-9ad3-a080b528f55b,},Annotations:map[string]string{io.kubernetes.container.hash: 3153068c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666,PodSandboxId:d68b0b3c616fdce874d6290e61da677e9ad64ad3524a251a2566382e5bc1d4ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671491471571511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c89d8de35203c4937d336ffd049f0c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 72c2f5cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c,PodSandboxId:9d8571f608d3a2d5eaf5ffff214cf7052f7ae0c14574eefbd7a4524956d09655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671491288975878,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215fcbbaabfad6adf8979dd73cdbd119,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b,PodSandboxId:c52036d797def3b2169eacb407337b3c26e02a2050835fbf9dbf68077e0eff65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671490863144055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a0a87e8db68805776126
f88fed9f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b,PodSandboxId:e48e454bfa0e345484da5933a2f4a08f609c726eab8f86cd8cfc28db75d7d5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671490677347330,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d77138dddca85a7e1089e836159cf396
,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5cc3cd79-59ed-49b4-b437-5ecf1eb70128 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.440476390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b50da7da-52d7-4c95-87ab-65a1e379d2c5 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.440531051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b50da7da-52d7-4c95-87ab-65a1e379d2c5 name=/runtime.v1.RuntimeService/Version
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.442083264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fe004321-73db-4962-ae36-5a409fb5417a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.442546785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706672598442532856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fe004321-73db-4962-ae36-5a409fb5417a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.443588136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=73c08944-c684-4a7c-bc33-b5eef0d42689 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.443632101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=73c08944-c684-4a7c-bc33-b5eef0d42689 name=/runtime.v1.RuntimeService/ListContainers
	Jan 31 03:43:18 embed-certs-958254 crio[702]: time="2024-01-31 03:43:18.443810201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9,PodSandboxId:d872a54f28ec3d515a97a89239fe9d18a4439ed5d23c67371a90db8c0263fab6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706671514865294927,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 019a6865-9ffb-4987-91d6-b679aaea9176,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7b76e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a,PodSandboxId:642e1c2de3a0230712aee73edc13887afd1ee2edb3fca11f51c6c93a281a5786,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706671514038911891,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2n2v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4679d4-8107-4a80-ba07-ce446e1e5d60,},Annotations:map[string]string{io.kubernetes.container.hash: 94e72910,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71,PodSandboxId:274e38d2caab4bff7c37be00a9a0e55f02a1ea8b62ee915b32c17da407fc5bad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706671513628896127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bnt4w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c92e2c-38c9-4c69-9ad3-a080b528f55b,},Annotations:map[string]string{io.kubernetes.container.hash: 3153068c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666,PodSandboxId:d68b0b3c616fdce874d6290e61da677e9ad64ad3524a251a2566382e5bc1d4ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706671491471571511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86c89d8de35203c4937d336ffd049f0c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 72c2f5cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c,PodSandboxId:9d8571f608d3a2d5eaf5ffff214cf7052f7ae0c14574eefbd7a4524956d09655,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706671491288975878,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 215fcbbaabfad6adf8979dd73cdbd119,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b,PodSandboxId:c52036d797def3b2169eacb407337b3c26e02a2050835fbf9dbf68077e0eff65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706671490863144055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96a0a87e8db68805776126
f88fed9f9e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b,PodSandboxId:e48e454bfa0e345484da5933a2f4a08f609c726eab8f86cd8cfc28db75d7d5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706671490677347330,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-958254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d77138dddca85a7e1089e836159cf396
,},Annotations:map[string]string{io.kubernetes.container.hash: 5f61b206,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=73c08944-c684-4a7c-bc33-b5eef0d42689 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	31a6175cd71fb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   d872a54f28ec3       storage-provisioner
	282758b49ba0b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   18 minutes ago      Running             kube-proxy                0                   642e1c2de3a02       kube-proxy-2n2v5
	6327cb1857367       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 minutes ago      Running             coredns                   0                   274e38d2caab4       coredns-5dd5756b68-bnt4w
	dee610ad050a0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   18 minutes ago      Running             etcd                      2                   d68b0b3c616fd       etcd-embed-certs-958254
	053f8db5e01cb       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   18 minutes ago      Running             kube-scheduler            2                   9d8571f608d3a       kube-scheduler-embed-certs-958254
	4173b9783cb73       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   18 minutes ago      Running             kube-controller-manager   2                   c52036d797def       kube-controller-manager-embed-certs-958254
	60fadb7138826       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   18 minutes ago      Running             kube-apiserver            2                   e48e454bfa0e3       kube-apiserver-embed-certs-958254
	
	
	==> coredns [6327cb18573679aa49ecf0fedd4979f330c5c341a3f1a07ec3a22f8487306d71] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               embed-certs-958254
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-958254
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6311e496aefb62bd53fcfd0fb6b150999d9424
	                    minikube.k8s.io/name=embed-certs-958254
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_31T03_24_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jan 2024 03:24:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-958254
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jan 2024 03:43:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jan 2024 03:40:38 +0000   Wed, 31 Jan 2024 03:24:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jan 2024 03:40:38 +0000   Wed, 31 Jan 2024 03:24:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jan 2024 03:40:38 +0000   Wed, 31 Jan 2024 03:24:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jan 2024 03:40:38 +0000   Wed, 31 Jan 2024 03:24:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    embed-certs-958254
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 635c5eb7349e4485a95c285d27353b0b
	  System UUID:                635c5eb7-349e-4485-a95c-285d27353b0b
	  Boot ID:                    2db96187-effc-4aaf-ac8e-36b129cbf8c3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-bnt4w                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-embed-certs-958254                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-embed-certs-958254             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-embed-certs-958254    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-2n2v5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-embed-certs-958254             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-57f55c9bc5-dj7l2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-958254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-958254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-958254 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node embed-certs-958254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node embed-certs-958254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node embed-certs-958254 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             18m                kubelet          Node embed-certs-958254 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m                kubelet          Node embed-certs-958254 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-958254 event: Registered Node embed-certs-958254 in Controller
	
	
	==> dmesg <==
	[Jan31 03:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063264] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.529417] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.872157] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134284] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.417081] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.402692] systemd-fstab-generator[628]: Ignoring "noauto" for root device
	[  +0.133707] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.195280] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.128863] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.291918] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[Jan31 03:20] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[ +20.110355] kauditd_printk_skb: 29 callbacks suppressed
	[Jan31 03:24] systemd-fstab-generator[3458]: Ignoring "noauto" for root device
	[  +9.288294] systemd-fstab-generator[3784]: Ignoring "noauto" for root device
	[Jan31 03:25] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [dee610ad050a0b5755a0b72989191c44952dea92a57950c51ce64f7ea90d8666] <==
	{"level":"info","ts":"2024-01-31T03:24:53.196712Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f6de64b207a208a","local-member-id":"457e62b9766c4f6a","added-peer-id":"457e62b9766c4f6a","added-peer-peer-urls":["https://192.168.39.232:2380"]}
	{"level":"info","ts":"2024-01-31T03:24:53.341777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:53.34192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:53.341971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a received MsgPreVoteResp from 457e62b9766c4f6a at term 1"}
	{"level":"info","ts":"2024-01-31T03:24:53.342013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became candidate at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:53.342082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a received MsgVoteResp from 457e62b9766c4f6a at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:53.342119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"457e62b9766c4f6a became leader at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:53.342153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 457e62b9766c4f6a elected leader 457e62b9766c4f6a at term 2"}
	{"level":"info","ts":"2024-01-31T03:24:53.343688Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"457e62b9766c4f6a","local-member-attributes":"{Name:embed-certs-958254 ClientURLs:[https://192.168.39.232:2379]}","request-path":"/0/members/457e62b9766c4f6a/attributes","cluster-id":"6f6de64b207a208a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-31T03:24:53.344369Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:53.34454Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:24:53.34494Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-31T03:24:53.345094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-31T03:24:53.345314Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-31T03:24:53.3496Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-31T03:24:53.346177Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.232:2379"}
	{"level":"info","ts":"2024-01-31T03:24:53.3554Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f6de64b207a208a","local-member-id":"457e62b9766c4f6a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:53.355487Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:24:53.355553Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-31T03:34:53.991217Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-01-31T03:34:53.994075Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.19552ms","hash":782478356}
	{"level":"info","ts":"2024-01-31T03:34:53.994161Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":782478356,"revision":714,"compact-revision":-1}
	{"level":"info","ts":"2024-01-31T03:39:53.999931Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2024-01-31T03:39:54.002882Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":957,"took":"2.103279ms","hash":3373281917}
	{"level":"info","ts":"2024-01-31T03:39:54.003915Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3373281917,"revision":957,"compact-revision":714}
	
	
	==> kernel <==
	 03:43:18 up 23 min,  0 users,  load average: 0.03, 0.10, 0.09
	Linux embed-certs-958254 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [60fadb7138826ffe061298b78e3938b941725b09d6444396a476839aeb179a1b] <==
	I0131 03:39:55.626876       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:39:56.626708       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:39:56.626805       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:39:56.626831       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:39:56.626909       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:39:56.626984       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:39:56.628168       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:40:55.514330       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:40:56.627428       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:40:56.627544       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:40:56.627570       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:40:56.628780       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:40:56.628866       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:40:56.628891       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0131 03:41:55.513821       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0131 03:42:55.514011       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0131 03:42:56.627985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:42:56.628022       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0131 03:42:56.628028       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0131 03:42:56.629192       1 handler_proxy.go:93] no RequestInfo found in the context
	E0131 03:42:56.629411       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0131 03:42:56.629445       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4173b9783cb7339fff6b3b90c355a83f9f1760906eec2cf7f55510253156f99b] <==
	I0131 03:37:41.727007       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:38:11.168792       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:38:11.734763       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:38:41.176839       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:38:41.743523       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:39:11.183411       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:39:11.759951       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:39:41.190078       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:39:41.770673       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:40:11.196435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:40:11.778378       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:40:41.202127       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:40:41.786332       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:41:11.208096       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:41:11.797681       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0131 03:41:11.951385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="276.905µs"
	I0131 03:41:26.955633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="339.665µs"
	E0131 03:41:41.214738       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:41:41.806376       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:42:11.220330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:42:11.814837       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:42:41.226346       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:42:41.823366       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0131 03:43:11.232151       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0131 03:43:11.833174       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [282758b49ba0bea3574046556ae9ccec38078ba37032735391ee7cdbc3313a4a] <==
	I0131 03:25:14.718630       1 server_others.go:69] "Using iptables proxy"
	I0131 03:25:14.760680       1 node.go:141] Successfully retrieved node IP: 192.168.39.232
	I0131 03:25:14.862740       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0131 03:25:14.862896       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0131 03:25:14.879584       1 server_others.go:152] "Using iptables Proxier"
	I0131 03:25:14.879696       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0131 03:25:14.880224       1 server.go:846] "Version info" version="v1.28.4"
	I0131 03:25:14.880303       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0131 03:25:14.882013       1 config.go:188] "Starting service config controller"
	I0131 03:25:14.882970       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0131 03:25:14.883021       1 config.go:315] "Starting node config controller"
	I0131 03:25:14.883030       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0131 03:25:14.890186       1 config.go:97] "Starting endpoint slice config controller"
	I0131 03:25:14.890312       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0131 03:25:14.985214       1 shared_informer.go:318] Caches are synced for node config
	I0131 03:25:14.985218       1 shared_informer.go:318] Caches are synced for service config
	I0131 03:25:14.991518       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [053f8db5e01cbf4e5d13708ad41abddde4870a535eb23dc53375ec22364c280c] <==
	W0131 03:24:55.627356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:55.627852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:55.627390       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0131 03:24:55.627913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0131 03:24:56.457113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:56.457335       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:56.520133       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0131 03:24:56.520278       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0131 03:24:56.523514       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:56.523598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:56.686225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0131 03:24:56.686419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0131 03:24:56.689041       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0131 03:24:56.689099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0131 03:24:56.703743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0131 03:24:56.703788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0131 03:24:56.711766       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0131 03:24:56.711808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0131 03:24:56.886531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0131 03:24:56.886625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0131 03:24:56.897130       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0131 03:24:56.897286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0131 03:24:56.913515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0131 03:24:56.913606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0131 03:24:59.121168       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-01-31 03:19:45 UTC, ends at Wed 2024-01-31 03:43:18 UTC. --
	Jan 31 03:40:59 embed-certs-958254 kubelet[3791]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:40:59 embed-certs-958254 kubelet[3791]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:40:59 embed-certs-958254 kubelet[3791]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:40:59 embed-certs-958254 kubelet[3791]: E0131 03:40:59.944843    3791 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 31 03:40:59 embed-certs-958254 kubelet[3791]: E0131 03:40:59.944891    3791 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 31 03:40:59 embed-certs-958254 kubelet[3791]: E0131 03:40:59.945138    3791 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6dzzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-dj7l2_kube-system(9a313a14-a142-46ad-8b24-f8ab75f92fa5): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 31 03:40:59 embed-certs-958254 kubelet[3791]: E0131 03:40:59.945182    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:41:11 embed-certs-958254 kubelet[3791]: E0131 03:41:11.933595    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:41:26 embed-certs-958254 kubelet[3791]: E0131 03:41:26.936105    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:41:39 embed-certs-958254 kubelet[3791]: E0131 03:41:39.934035    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:41:52 embed-certs-958254 kubelet[3791]: E0131 03:41:52.934302    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:41:59 embed-certs-958254 kubelet[3791]: E0131 03:41:59.022874    3791 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:41:59 embed-certs-958254 kubelet[3791]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:41:59 embed-certs-958254 kubelet[3791]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:41:59 embed-certs-958254 kubelet[3791]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:42:05 embed-certs-958254 kubelet[3791]: E0131 03:42:05.933424    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:42:19 embed-certs-958254 kubelet[3791]: E0131 03:42:19.935677    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:42:34 embed-certs-958254 kubelet[3791]: E0131 03:42:34.934643    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:42:49 embed-certs-958254 kubelet[3791]: E0131 03:42:49.933685    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:42:59 embed-certs-958254 kubelet[3791]: E0131 03:42:59.017573    3791 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 31 03:42:59 embed-certs-958254 kubelet[3791]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 31 03:42:59 embed-certs-958254 kubelet[3791]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 31 03:42:59 embed-certs-958254 kubelet[3791]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 31 03:43:04 embed-certs-958254 kubelet[3791]: E0131 03:43:04.934301    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	Jan 31 03:43:18 embed-certs-958254 kubelet[3791]: E0131 03:43:18.936457    3791 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dj7l2" podUID="9a313a14-a142-46ad-8b24-f8ab75f92fa5"
	
	
	==> storage-provisioner [31a6175cd71fb898356642b4fa6d74de88ce2f85e39f4aab7c30a426c8f9d3d9] <==
	I0131 03:25:15.037808       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0131 03:25:15.067714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0131 03:25:15.067812       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0131 03:25:15.117698       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0131 03:25:15.117950       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-958254_f815b7d2-e7cf-4663-87f8-8d4d338bc705!
	I0131 03:25:15.120871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"151c31f5-d93d-432f-89fe-6f972c6676bb", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-958254_f815b7d2-e7cf-4663-87f8-8d4d338bc705 became leader
	I0131 03:25:15.218568       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-958254_f815b7d2-e7cf-4663-87f8-8d4d338bc705!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-958254 -n embed-certs-958254
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-958254 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-dj7l2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-958254 describe pod metrics-server-57f55c9bc5-dj7l2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-958254 describe pod metrics-server-57f55c9bc5-dj7l2: exit status 1 (63.796055ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-dj7l2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-958254 describe pod metrics-server-57f55c9bc5-dj7l2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (288.89s)
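To dig into a single failure like this one outside of CI, the subtest can usually be re-run in isolation with go test's -run filter. The sketch below is illustrative only: it assumes the standard minikube repository layout (integration tests under test/integration), a locally built out/minikube-linux-amd64, and a working kvm2/CRI-O environment; the harness also accepts extra driver and runtime flags, which are omitted here rather than guessed at.

	# Hedged sketch: re-run one failing subtest locally (repo layout and timeout are assumptions)
	go test ./test/integration -v -timeout 90m \
	  -run 'TestStartStop/group/embed-certs/serial/AddonExistsAfterStop'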

                                                
                                    

Test pass (235/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 23.95
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
9 TestDownloadOnly/v1.16.0/DeleteAll 0.16
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 24.3
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 14.12
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.15
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.62
31 TestOffline 94.08
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 212.28
38 TestAddons/parallel/Registry 16.84
40 TestAddons/parallel/InspektorGadget 17.49
41 TestAddons/parallel/MetricsServer 5.95
42 TestAddons/parallel/HelmTiller 21.84
44 TestAddons/parallel/CSI 75.59
45 TestAddons/parallel/Headlamp 15.71
46 TestAddons/parallel/CloudSpanner 5.72
47 TestAddons/parallel/LocalPath 15.28
48 TestAddons/parallel/NvidiaDevicePlugin 5.97
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 75.86
55 TestCertExpiration 302.33
57 TestForceSystemdFlag 104.98
58 TestForceSystemdEnv 47.8
60 TestKVMDriverInstallOrUpdate 4.45
64 TestErrorSpam/setup 44.97
65 TestErrorSpam/start 0.41
66 TestErrorSpam/status 0.82
67 TestErrorSpam/pause 1.54
68 TestErrorSpam/unpause 1.7
69 TestErrorSpam/stop 2.28
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 59.35
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 29.56
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.24
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 41.24
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.51
92 TestFunctional/serial/LogsFileCmd 1.48
93 TestFunctional/serial/InvalidService 4.68
95 TestFunctional/parallel/ConfigCmd 0.49
96 TestFunctional/parallel/DashboardCmd 17.5
97 TestFunctional/parallel/DryRun 0.4
98 TestFunctional/parallel/InternationalLanguage 0.2
99 TestFunctional/parallel/StatusCmd 1.32
103 TestFunctional/parallel/ServiceCmdConnect 8.83
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 46.82
107 TestFunctional/parallel/SSHCmd 0.55
108 TestFunctional/parallel/CpCmd 1.58
109 TestFunctional/parallel/MySQL 32.12
110 TestFunctional/parallel/FileSync 0.27
111 TestFunctional/parallel/CertSync 1.47
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.84
119 TestFunctional/parallel/License 0.95
120 TestFunctional/parallel/ServiceCmd/DeployApp 13.22
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
122 TestFunctional/parallel/ProfileCmd/profile_list 0.43
123 TestFunctional/parallel/MountCmd/any-port 11.85
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
125 TestFunctional/parallel/Version/short 0.07
126 TestFunctional/parallel/Version/components 0.94
127 TestFunctional/parallel/MountCmd/specific-port 1.83
128 TestFunctional/parallel/ServiceCmd/List 0.56
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
132 TestFunctional/parallel/ServiceCmd/Format 0.45
133 TestFunctional/parallel/ServiceCmd/URL 0.45
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.41
147 TestFunctional/parallel/ImageCommands/ImageBuild 4.74
148 TestFunctional/parallel/ImageCommands/Setup 1.98
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestIngressAddonLegacy/StartLegacyK8sCluster 92.2
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.42
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
172 TestJSONOutput/start/Command 60.7
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.64
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.67
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.11
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.23
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 93.75
204 TestMountStart/serial/StartWithMountFirst 29.36
205 TestMountStart/serial/VerifyMountFirst 0.41
206 TestMountStart/serial/StartWithMountSecond 29.14
207 TestMountStart/serial/VerifyMountSecond 0.42
208 TestMountStart/serial/DeleteFirst 0.89
209 TestMountStart/serial/VerifyMountPostDelete 0.42
210 TestMountStart/serial/Stop 1.21
211 TestMountStart/serial/RestartStopped 23.74
212 TestMountStart/serial/VerifyMountPostStop 0.41
215 TestMultiNode/serial/FreshStart2Nodes 152.62
216 TestMultiNode/serial/DeployApp2Nodes 5.85
217 TestMultiNode/serial/PingHostFrom2Pods 0.94
218 TestMultiNode/serial/AddNode 42.53
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.23
221 TestMultiNode/serial/CopyFile 7.9
222 TestMultiNode/serial/StopNode 2.24
223 TestMultiNode/serial/StartAfterStop 29.47
225 TestMultiNode/serial/DeleteNode 1.79
227 TestMultiNode/serial/RestartMultiNode 444.76
228 TestMultiNode/serial/ValidateNameConflict 48.59
235 TestScheduledStopUnix 118.57
239 TestRunningBinaryUpgrade 223.63
241 TestKubernetesUpgrade 248.68
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
245 TestNoKubernetes/serial/StartWithK8s 100.48
253 TestNetworkPlugins/group/false 3.65
264 TestStoppedBinaryUpgrade/Setup 2.19
265 TestStoppedBinaryUpgrade/Upgrade 160.98
266 TestNoKubernetes/serial/StartWithStopK8s 64.79
267 TestNoKubernetes/serial/Start 28.86
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
269 TestNoKubernetes/serial/ProfileList 22.5
270 TestNoKubernetes/serial/Stop 1.26
271 TestNoKubernetes/serial/StartNoArgs 40.14
273 TestPause/serial/Start 99.96
274 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
277 TestNetworkPlugins/group/auto/Start 75.23
278 TestNetworkPlugins/group/kindnet/Start 81.32
279 TestNetworkPlugins/group/calico/Start 125.92
280 TestNetworkPlugins/group/auto/KubeletFlags 0.33
281 TestNetworkPlugins/group/auto/NetCatPod 11.35
282 TestNetworkPlugins/group/auto/DNS 0.2
283 TestNetworkPlugins/group/auto/Localhost 0.15
284 TestNetworkPlugins/group/auto/HairPin 0.16
285 TestNetworkPlugins/group/custom-flannel/Start 87.27
286 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
287 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
288 TestNetworkPlugins/group/kindnet/NetCatPod 12.28
289 TestNetworkPlugins/group/kindnet/DNS 0.21
290 TestNetworkPlugins/group/kindnet/Localhost 0.16
291 TestNetworkPlugins/group/kindnet/HairPin 0.16
292 TestNetworkPlugins/group/enable-default-cni/Start 70
293 TestNetworkPlugins/group/calico/ControllerPod 6.01
294 TestNetworkPlugins/group/calico/KubeletFlags 0.25
295 TestNetworkPlugins/group/calico/NetCatPod 14.24
296 TestNetworkPlugins/group/calico/DNS 0.2
297 TestNetworkPlugins/group/calico/Localhost 0.16
298 TestNetworkPlugins/group/calico/HairPin 0.18
299 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
300 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.36
301 TestNetworkPlugins/group/flannel/Start 89.79
302 TestNetworkPlugins/group/bridge/Start 93.56
303 TestNetworkPlugins/group/custom-flannel/DNS 0.19
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
306 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
307 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
309 TestStartStop/group/old-k8s-version/serial/FirstStart 154.19
310 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
311 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
312 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
314 TestStartStop/group/no-preload/serial/FirstStart 123.22
315 TestNetworkPlugins/group/flannel/ControllerPod 6.01
316 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
317 TestNetworkPlugins/group/flannel/NetCatPod 12.24
318 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
319 TestNetworkPlugins/group/bridge/NetCatPod 12.28
320 TestNetworkPlugins/group/flannel/DNS 0.2
321 TestNetworkPlugins/group/flannel/Localhost 0.16
322 TestNetworkPlugins/group/flannel/HairPin 0.16
323 TestNetworkPlugins/group/bridge/DNS 0.24
324 TestNetworkPlugins/group/bridge/Localhost 0.18
325 TestNetworkPlugins/group/bridge/HairPin 0.18
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.24
329 TestStartStop/group/newest-cni/serial/FirstStart 83.23
330 TestStartStop/group/no-preload/serial/DeployApp 10.31
331 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
333 TestStartStop/group/old-k8s-version/serial/DeployApp 11.4
334 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.9
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.35
341 TestStartStop/group/newest-cni/serial/Stop 3.12
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
343 TestStartStop/group/newest-cni/serial/SecondStart 48.55
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
347 TestStartStop/group/newest-cni/serial/Pause 2.69
349 TestStartStop/group/embed-certs/serial/FirstStart 63.77
351 TestStartStop/group/no-preload/serial/SecondStart 695.14
353 TestStartStop/group/embed-certs/serial/DeployApp 10.27
355 TestStartStop/group/old-k8s-version/serial/SecondStart 710.41
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
358 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 868.44
360 TestStartStop/group/embed-certs/serial/SecondStart 744.59
x
+
TestDownloadOnly/v1.16.0/json-events (23.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-854494 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-854494 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.950783763s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (23.95s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
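The preload-exists step only asserts that the tarball fetched by the json-events step is present in the local cache. A rough manual equivalent, assuming MINIKUBE_HOME points at the .minikube directory as it does in the LogsDuration output that follows (the exact filename varies with Kubernetes version and container runtime), would be:

	# Hedged sketch: check the cached preload tarball for v1.16.0/cri-o (path layout taken from the logs below)
	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/" | grep 'v1.16.0-cri-o'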

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-854494
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-854494: exit status 85 (85.400106ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-854494 | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC |          |
	|         | -p download-only-854494        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 02:04:01
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 02:04:01.469605 1419988 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:04:01.469745 1419988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:04:01.469755 1419988 out.go:309] Setting ErrFile to fd 2...
	I0131 02:04:01.469760 1419988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:04:01.469961 1419988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	W0131 02:04:01.470121 1419988 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18051-1412717/.minikube/config/config.json: open /home/jenkins/minikube-integration/18051-1412717/.minikube/config/config.json: no such file or directory
	I0131 02:04:01.470775 1419988 out.go:303] Setting JSON to true
	I0131 02:04:01.471707 1419988 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":24385,"bootTime":1706642257,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 02:04:01.471783 1419988 start.go:138] virtualization: kvm guest
	I0131 02:04:01.474100 1419988 out.go:97] [download-only-854494] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 02:04:01.475762 1419988 out.go:169] MINIKUBE_LOCATION=18051
	W0131 02:04:01.474222 1419988 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball: no such file or directory
	I0131 02:04:01.474319 1419988 notify.go:220] Checking for updates...
	I0131 02:04:01.478461 1419988 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 02:04:01.479905 1419988 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:04:01.481242 1419988 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:04:01.482595 1419988 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0131 02:04:01.485247 1419988 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0131 02:04:01.485500 1419988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 02:04:01.519702 1419988 out.go:97] Using the kvm2 driver based on user configuration
	I0131 02:04:01.519735 1419988 start.go:298] selected driver: kvm2
	I0131 02:04:01.519742 1419988 start.go:902] validating driver "kvm2" against <nil>
	I0131 02:04:01.520100 1419988 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:04:01.520242 1419988 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 02:04:01.535537 1419988 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 02:04:01.535629 1419988 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 02:04:01.536085 1419988 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0131 02:04:01.536242 1419988 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0131 02:04:01.536332 1419988 cni.go:84] Creating CNI manager for ""
	I0131 02:04:01.536341 1419988 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:04:01.536353 1419988 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0131 02:04:01.536359 1419988 start_flags.go:321] config:
	{Name:download-only-854494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-854494 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:04:01.536566 1419988 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:04:01.538623 1419988 out.go:97] Downloading VM boot image ...
	I0131 02:04:01.538669 1419988 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0131 02:04:10.151306 1419988 out.go:97] Starting control plane node download-only-854494 in cluster download-only-854494
	I0131 02:04:10.151377 1419988 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 02:04:10.251713 1419988 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0131 02:04:10.251757 1419988 cache.go:56] Caching tarball of preloaded images
	I0131 02:04:10.251955 1419988 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0131 02:04:10.253914 1419988 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0131 02:04:10.253934 1419988 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:04:10.358277 1419988 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-854494"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
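The non-zero exit from minikube logs above is expected for a download-only profile: as the captured output says, the control plane node was never created, so there is no cluster to collect logs from, and this test only measures how long the call takes. A minimal way to observe the same behavior by hand, reusing the command from this run purely as an example (the profile is removed again by the later Delete steps):

	# Hedged sketch: 'minikube logs' on a download-only profile exits non-zero (85 in this run)
	out/minikube-linux-amd64 logs -p download-only-854494
	echo "exit status: $?"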

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-854494
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (24.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-407605 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-407605 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.298128939s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (24.30s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-407605
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-407605: exit status 85 (79.111258ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-854494 | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC |                     |
	|         | -p download-only-854494        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC | 31 Jan 24 02:04 UTC |
	| delete  | -p download-only-854494        | download-only-854494 | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC | 31 Jan 24 02:04 UTC |
	| start   | -o=json --download-only        | download-only-407605 | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC |                     |
	|         | -p download-only-407605        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 02:04:25
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 02:04:25.808613 1420175 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:04:25.808902 1420175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:04:25.808913 1420175 out.go:309] Setting ErrFile to fd 2...
	I0131 02:04:25.808918 1420175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:04:25.809164 1420175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:04:25.809826 1420175 out.go:303] Setting JSON to true
	I0131 02:04:25.810866 1420175 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":24409,"bootTime":1706642257,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 02:04:25.810930 1420175 start.go:138] virtualization: kvm guest
	I0131 02:04:25.813275 1420175 out.go:97] [download-only-407605] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 02:04:25.814780 1420175 out.go:169] MINIKUBE_LOCATION=18051
	I0131 02:04:25.813411 1420175 notify.go:220] Checking for updates...
	I0131 02:04:25.817943 1420175 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 02:04:25.819876 1420175 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:04:25.821411 1420175 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:04:25.822772 1420175 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0131 02:04:25.825256 1420175 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0131 02:04:25.825541 1420175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 02:04:25.857804 1420175 out.go:97] Using the kvm2 driver based on user configuration
	I0131 02:04:25.857847 1420175 start.go:298] selected driver: kvm2
	I0131 02:04:25.857857 1420175 start.go:902] validating driver "kvm2" against <nil>
	I0131 02:04:25.858223 1420175 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:04:25.858330 1420175 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 02:04:25.873653 1420175 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 02:04:25.873728 1420175 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 02:04:25.874253 1420175 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0131 02:04:25.874427 1420175 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0131 02:04:25.874532 1420175 cni.go:84] Creating CNI manager for ""
	I0131 02:04:25.874551 1420175 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:04:25.874568 1420175 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0131 02:04:25.874582 1420175 start_flags.go:321] config:
	{Name:download-only-407605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-407605 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:04:25.874785 1420175 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:04:25.876390 1420175 out.go:97] Starting control plane node download-only-407605 in cluster download-only-407605
	I0131 02:04:25.876416 1420175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 02:04:26.249259 1420175 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 02:04:26.249309 1420175 cache.go:56] Caching tarball of preloaded images
	I0131 02:04:26.249491 1420175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 02:04:26.251637 1420175 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0131 02:04:26.251660 1420175 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:04:26.349911 1420175 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0131 02:04:39.766276 1420175 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:04:39.766376 1420175 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:04:40.698804 1420175 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0131 02:04:40.699161 1420175 profile.go:148] Saving config to /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/download-only-407605/config.json ...
	I0131 02:04:40.699191 1420175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/download-only-407605/config.json: {Name:mk1252115776f9a8a17cde89eed2f59cf9b67cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0131 02:04:40.699365 1420175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0131 02:04:40.699504 1420175 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-407605"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-407605
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (14.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-319090 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-319090 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.121362782s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (14.12s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-319090
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-319090: exit status 85 (83.666961ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-854494 | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC |                     |
	|         | -p download-only-854494           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC | 31 Jan 24 02:04 UTC |
	| delete  | -p download-only-854494           | download-only-854494 | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC | 31 Jan 24 02:04 UTC |
	| start   | -o=json --download-only           | download-only-407605 | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC |                     |
	|         | -p download-only-407605           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC | 31 Jan 24 02:04 UTC |
	| delete  | -p download-only-407605           | download-only-407605 | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC | 31 Jan 24 02:04 UTC |
	| start   | -o=json --download-only           | download-only-319090 | jenkins | v1.32.0 | 31 Jan 24 02:04 UTC |                     |
	|         | -p download-only-319090           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/31 02:04:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0131 02:04:50.472897 1420377 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:04:50.473082 1420377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:04:50.473096 1420377 out.go:309] Setting ErrFile to fd 2...
	I0131 02:04:50.473101 1420377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:04:50.473300 1420377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:04:50.473934 1420377 out.go:303] Setting JSON to true
	I0131 02:04:50.474925 1420377 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":24434,"bootTime":1706642257,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 02:04:50.474995 1420377 start.go:138] virtualization: kvm guest
	I0131 02:04:50.477498 1420377 out.go:97] [download-only-319090] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 02:04:50.479519 1420377 out.go:169] MINIKUBE_LOCATION=18051
	I0131 02:04:50.477703 1420377 notify.go:220] Checking for updates...
	I0131 02:04:50.482365 1420377 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 02:04:50.483844 1420377 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:04:50.485304 1420377 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:04:50.487061 1420377 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0131 02:04:50.489988 1420377 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0131 02:04:50.490259 1420377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 02:04:50.523670 1420377 out.go:97] Using the kvm2 driver based on user configuration
	I0131 02:04:50.523705 1420377 start.go:298] selected driver: kvm2
	I0131 02:04:50.523711 1420377 start.go:902] validating driver "kvm2" against <nil>
	I0131 02:04:50.524090 1420377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:04:50.524198 1420377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18051-1412717/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0131 02:04:50.539599 1420377 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0131 02:04:50.539711 1420377 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0131 02:04:50.540229 1420377 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0131 02:04:50.540405 1420377 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0131 02:04:50.540476 1420377 cni.go:84] Creating CNI manager for ""
	I0131 02:04:50.540492 1420377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0131 02:04:50.540505 1420377 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0131 02:04:50.540513 1420377 start_flags.go:321] config:
	{Name:download-only-319090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-319090 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:04:50.540737 1420377 iso.go:125] acquiring lock: {Name:mkba4945df292b78283511238c0d1be3f219c19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0131 02:04:50.542725 1420377 out.go:97] Starting control plane node download-only-319090 in cluster download-only-319090
	I0131 02:04:50.542755 1420377 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 02:04:50.900455 1420377 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0131 02:04:50.900512 1420377 cache.go:56] Caching tarball of preloaded images
	I0131 02:04:50.900660 1420377 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0131 02:04:50.902867 1420377 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0131 02:04:50.902898 1420377 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0131 02:04:51.005095 1420377 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18051-1412717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-319090"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-319090
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-352988 --alsologtostderr --binary-mirror http://127.0.0.1:43041 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-352988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-352988
--- PASS: TestBinaryMirror (0.62s)
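
TestBinaryMirror points --binary-mirror at a local HTTP endpoint so kubectl, kubeadm and kubelet are fetched from it instead of dl.k8s.io. A rough way to reproduce this by hand, assuming ./mirror reproduces the dl.k8s.io release path layout (the profile name and mirror directory here are illustrative):

  # Serve the local mirror, then start a download-only profile against it.
  python3 -m http.server 43041 --directory ./mirror &
  out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:43041 --driver=kvm2 --container-runtime=crio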

                                                
                                    
TestOffline (94.08s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-331647 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-331647 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m32.985114211s)
helpers_test.go:175: Cleaning up "offline-crio-331647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-331647
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-331647: (1.098788412s)
--- PASS: TestOffline (94.08s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-165032
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-165032: exit status 85 (80.691645ms)

                                                
                                                
-- stdout --
	* Profile "addons-165032" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-165032"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-165032
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-165032: exit status 85 (79.421764ms)

                                                
                                                
-- stdout --
	* Profile "addons-165032" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-165032"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (212.28s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-165032 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-165032 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.279411022s)
--- PASS: TestAddons/Setup (212.28s)
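
The Setup run enables every addon up front through repeated --addons flags; the later parallel tests toggle individual addons on the same profile. The equivalent per-addon commands look like this (profile name taken from this report):

  # Inspect and toggle addons on an already-running profile.
  out/minikube-linux-amd64 addons list -p addons-165032
  out/minikube-linux-amd64 addons enable headlamp -p addons-165032
  out/minikube-linux-amd64 addons disable metrics-server -p addons-165032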

                                                
                                    
TestAddons/parallel/Registry (16.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 26.646454ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-c9zsd" [91cd4aa6-c504-47cc-a6f4-cb3df86d81c1] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004749564s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xwffk" [1e6273d6-5a09-4ec3-aa41-8745cb15c2f5] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006108126s
addons_test.go:340: (dbg) Run:  kubectl --context addons-165032 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-165032 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-165032 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.944589906s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 ip
2024/01/31 02:08:54 [DEBUG] GET http://192.168.39.232:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.84s)
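
The registry check runs wget --spider against registry.kube-system.svc.cluster.local from inside the cluster; the "GET http://192.168.39.232:5000" line is the same registry reached from the host via the node IP. A sketch of probing it by hand, assuming the addon still exposes port 5000 on the node:

  # Standard Docker Registry HTTP API v2 endpoints; an empty catalog is expected on a fresh addon.
  NODE_IP=$(out/minikube-linux-amd64 -p addons-165032 ip)
  curl -s "http://$NODE_IP:5000/v2/_catalog"
  curl -s -o /dev/null -w '%{http_code}\n' "http://$NODE_IP:5000/v2/"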

                                                
                                    
TestAddons/parallel/InspektorGadget (17.49s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2xgzx" [221cbad7-3ff2-4db3-afc1-b698bc240955] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005374032s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-165032
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-165032: (11.487420732s)
--- PASS: TestAddons/parallel/InspektorGadget (17.49s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.95s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 11.695286ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-wwrv8" [c01fcfa3-b4c2-4ea7-a9f3-ef80086f017c] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014874156s
addons_test.go:415: (dbg) Run:  kubectl --context addons-165032 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.95s)

                                                
                                    
TestAddons/parallel/HelmTiller (21.84s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.061972ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-9cnkj" [9ce39066-3c40-4a16-bb14-914a5acdcf78] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00697489s
addons_test.go:473: (dbg) Run:  kubectl --context addons-165032 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-165032 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (16.15454327s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (21.84s)

                                                
                                    
TestAddons/parallel/CSI (75.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 27.82777ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-165032 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-165032 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f41b8dfa-e4ca-4cce-978f-1054e8c6d272] Pending
helpers_test.go:344: "task-pv-pod" [f41b8dfa-e4ca-4cce-978f-1054e8c6d272] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f41b8dfa-e4ca-4cce-978f-1054e8c6d272] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.013095864s
addons_test.go:584: (dbg) Run:  kubectl --context addons-165032 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-165032 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-165032 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-165032 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-165032 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-165032 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-165032 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2b5ebf98-50d2-4c8d-8c96-1d025971fe1d] Pending
helpers_test.go:344: "task-pv-pod-restore" [2b5ebf98-50d2-4c8d-8c96-1d025971fe1d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2b5ebf98-50d2-4c8d-8c96-1d025971fe1d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003812123s
addons_test.go:626: (dbg) Run:  kubectl --context addons-165032 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-165032 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-165032 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-165032 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.819540577s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (75.59s)
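
The long runs of "get pvc ... -o jsonpath={.status.phase}" above are the test helper polling until each claim binds. With kubectl 1.23 or newer the same waits can be written declaratively, which is a more readable way to reproduce the check by hand (a sketch, not what the harness itself does):

  # Block until the PVC from testdata/csi-hostpath-driver/pvc.yaml is Bound, then until the pod is Ready.
  kubectl --context addons-165032 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
  kubectl --context addons-165032 wait --for=condition=Ready pod -l app=task-pv-pod --timeout=6m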

                                                
                                    
TestAddons/parallel/Headlamp (15.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-165032 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-165032 --alsologtostderr -v=1: (1.703794918s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-gzz44" [8bea9b65-ee95-41ee-aab6-9f15286c153a] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-gzz44" [8bea9b65-ee95-41ee-aab6-9f15286c153a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-gzz44" [8bea9b65-ee95-41ee-aab6-9f15286c153a] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004574321s
--- PASS: TestAddons/parallel/Headlamp (15.71s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-ncgmj" [e68e7daf-f29a-4334-91de-52275599546a] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003574282s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-165032
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

                                                
                                    
TestAddons/parallel/LocalPath (15.28s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-165032 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-165032 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-165032 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b2c7b507-6dc6-410a-be86-7e1da31ff324] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b2c7b507-6dc6-410a-be86-7e1da31ff324] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b2c7b507-6dc6-410a-be86-7e1da31ff324] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004592032s
addons_test.go:891: (dbg) Run:  kubectl --context addons-165032 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 ssh "cat /opt/local-path-provisioner/pvc-acc797b0-8a0d-4af3-bfa4-607db152ba6b_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-165032 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-165032 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-165032 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.28s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.97s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kqz46" [bb5127c3-731f-49ab-8391-4b9b2e955e8f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00829656s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-165032
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.97s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-jbprw" [be7581cf-1640-4563-b94c-21907584e24d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005899721s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-165032 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-165032 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (75.86s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-430741 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-430741 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m14.295065871s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-430741 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-430741 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-430741 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-430741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-430741
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-430741: (1.049590789s)
--- PASS: TestCertOptions (75.86s)
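
The openssl step above dumps the whole apiserver certificate; what --apiserver-ips, --apiserver-names and --apiserver-port actually change are the SAN list and the server URL in admin.conf. A compact way to check just those, reusing the in-VM paths from this test (illustrative):

  # Show only the Subject Alternative Name block of the apiserver certificate.
  out/minikube-linux-amd64 -p cert-options-430741 ssh \
    "openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'

  # Confirm the non-default API server port (8555) made it into the kubeconfig inside the VM.
  out/minikube-linux-amd64 ssh -p cert-options-430741 -- "sudo cat /etc/kubernetes/admin.conf" | grep 'server:'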

                                                
                                    
TestCertExpiration (302.33s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-897667 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-897667 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m19.636928137s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-897667 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-897667 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (41.442134542s)
helpers_test.go:175: Cleaning up "cert-expiration-897667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-897667
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-897667: (1.246687623s)
--- PASS: TestCertExpiration (302.33s)
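
The first start with --cert-expiration=3m mints short-lived certificates and the second start with --cert-expiration=8760h regenerates them; the effect is easiest to see by reading the certificate's notAfter date between the two starts (a sketch, assuming the same certificate path used elsewhere in this report):

  # Print the expiry of the apiserver certificate inside the VM.
  out/minikube-linux-amd64 -p cert-expiration-897667 ssh \
    "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"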

                                                
                                    
TestForceSystemdFlag (104.98s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-097545 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-097545 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m43.701183113s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-097545 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-097545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-097545
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-097545: (1.056755182s)
--- PASS: TestForceSystemdFlag (104.98s)
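
The "cat /etc/crio/crio.conf.d/02-crio.conf" step is how the test confirms that --force-systemd reached CRI-O. Narrowing the check to the one relevant key makes the assertion clearer (a sketch):

  # With --force-systemd the drop-in is expected to select the systemd cgroup manager.
  out/minikube-linux-amd64 -p force-systemd-flag-097545 ssh \
    "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"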

                                                
                                    
TestForceSystemdEnv (47.8s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-350049 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-350049 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.977350879s)
helpers_test.go:175: Cleaning up "force-systemd-env-350049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-350049
--- PASS: TestForceSystemdEnv (47.80s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.45s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.45s)

                                                
                                    
TestErrorSpam/setup (44.97s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-736852 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-736852 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-736852 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-736852 --driver=kvm2  --container-runtime=crio: (44.973731734s)
--- PASS: TestErrorSpam/setup (44.97s)

                                                
                                    
TestErrorSpam/start (0.41s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

                                                
                                    
TestErrorSpam/status (0.82s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 status
--- PASS: TestErrorSpam/status (0.82s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
TestErrorSpam/stop (2.28s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 stop: (2.09993643s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-736852 --log_dir /tmp/nospam-736852 stop
--- PASS: TestErrorSpam/stop (2.28s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/18051-1412717/.minikube/files/etc/test/nested/copy/1419976/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (59.35s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618885 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-618885 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (59.351963114s)
--- PASS: TestFunctional/serial/StartWithProxy (59.35s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.56s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618885 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-618885 --alsologtostderr -v=8: (29.554622736s)
functional_test.go:659: soft start took 29.555482998s for "functional-618885" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.56s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-618885 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-618885 cache add registry.k8s.io/pause:3.1: (1.312748323s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-618885 cache add registry.k8s.io/pause:3.3: (1.459052858s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-618885 cache add registry.k8s.io/pause:latest: (1.471160399s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.24s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (234.706206ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-618885 cache reload: (1.308003864s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
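The cache_reload flow above is straightforward to replay outside the test harness. Below is a minimal Go sketch, assuming a minikube binary on PATH and reusing the profile name from this report purely for illustration: remove the cached image inside the node, confirm crictl no longer finds it, run "cache reload", then confirm the image is back.

	// Minimal sketch (not part of the test suite): replay the cache reload cycle.
	// Assumes "minikube" is on PATH; the profile name is copied from this report.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const profile = "functional-618885"

	func run(args ...string) error {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		_ = run("-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		if run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			fmt.Println("unexpected: image still present after rmi")
		}
		_ = run("-p", profile, "cache", "reload")
		if err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("cache reload did not restore the image:", err)
		}
	}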

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 kubectl -- --context functional-618885 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-618885 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (41.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618885 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-618885 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.241833098s)
functional_test.go:757: restart took 41.24206227s for "functional-618885" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.24s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-618885 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.51s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-618885 logs: (1.507736018s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 logs --file /tmp/TestFunctionalserialLogsFileCmd758194116/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-618885 logs --file /tmp/TestFunctionalserialLogsFileCmd758194116/001/logs.txt: (1.477635987s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (4.68s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-618885 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-618885
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-618885: exit status 115 (316.981832ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.221:31059 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-618885 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-618885 delete -f testdata/invalidsvc.yaml: (1.14389378s)
--- PASS: TestFunctional/serial/InvalidService (4.68s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 config get cpus: exit status 14 (92.142077ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 config get cpus: exit status 14 (70.097008ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
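For reference, the "exit status 14" above is what "config get" returns for a key that is not set. A minimal Go sketch of the same set/get/unset cycle, assuming minikube is on PATH (the profile name is illustrative):

	// Minimal sketch (not part of the test suite): exercise config set/get/unset.
	// Assumes "minikube" is on PATH; the profile name is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		p := "functional-618885"
		_ = exec.Command("minikube", "-p", p, "config", "set", "cpus", "2").Run()

		out, _ := exec.Command("minikube", "-p", p, "config", "get", "cpus").Output()
		fmt.Printf("cpus = %s", out)

		_ = exec.Command("minikube", "-p", p, "config", "unset", "cpus").Run()
		err := exec.Command("minikube", "-p", p, "config", "get", "cpus").Run()
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("get after unset exited with status", ee.ExitCode()) // 14 in the run above
		}
	}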

                                                
                                    
TestFunctional/parallel/DashboardCmd (17.5s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-618885 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-618885 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1427504: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.50s)

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618885 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-618885 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (199.853116ms)

                                                
                                                
-- stdout --
	* [functional-618885] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 02:17:50.152742 1426860 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:17:50.152946 1426860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:17:50.152958 1426860 out.go:309] Setting ErrFile to fd 2...
	I0131 02:17:50.152966 1426860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:17:50.153226 1426860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:17:50.153840 1426860 out.go:303] Setting JSON to false
	I0131 02:17:50.154976 1426860 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":25213,"bootTime":1706642257,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 02:17:50.155047 1426860 start.go:138] virtualization: kvm guest
	I0131 02:17:50.156983 1426860 out.go:177] * [functional-618885] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 02:17:50.158800 1426860 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 02:17:50.158812 1426860 notify.go:220] Checking for updates...
	I0131 02:17:50.160147 1426860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 02:17:50.161622 1426860 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:17:50.163085 1426860 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:17:50.164430 1426860 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 02:17:50.165770 1426860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 02:17:50.167617 1426860 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:17:50.168348 1426860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:17:50.168418 1426860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:17:50.190118 1426860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41625
	I0131 02:17:50.190726 1426860 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:17:50.191444 1426860 main.go:141] libmachine: Using API Version  1
	I0131 02:17:50.191500 1426860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:17:50.191968 1426860 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:17:50.192252 1426860 main.go:141] libmachine: (functional-618885) Calling .DriverName
	I0131 02:17:50.192601 1426860 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 02:17:50.193073 1426860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:17:50.193127 1426860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:17:50.215485 1426860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I0131 02:17:50.216019 1426860 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:17:50.216668 1426860 main.go:141] libmachine: Using API Version  1
	I0131 02:17:50.216697 1426860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:17:50.217119 1426860 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:17:50.217357 1426860 main.go:141] libmachine: (functional-618885) Calling .DriverName
	I0131 02:17:50.266888 1426860 out.go:177] * Using the kvm2 driver based on existing profile
	I0131 02:17:50.268342 1426860 start.go:298] selected driver: kvm2
	I0131 02:17:50.268366 1426860 start.go:902] validating driver "kvm2" against &{Name:functional-618885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-618885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.221 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:17:50.268584 1426860 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 02:17:50.271244 1426860 out.go:177] 
	W0131 02:17:50.272731 1426860 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0131 02:17:50.273965 1426860 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618885 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
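The dry-run failure above is a pure validation step: no VM is created, and the process exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the usable minimum. A minimal Go sketch that reproduces the check, assuming minikube is on PATH and copying the flags from the log:

	// Minimal sketch (not part of the test suite): trigger the memory validation error.
	// Assumes "minikube" is on PATH; profile name and flags are taken from the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "functional-618885",
			"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("dry-run exit code:", ee.ExitCode()) // 23 in the run above
		}
	}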

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-618885 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-618885 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (203.08745ms)

                                                
                                                
-- stdout --
	* [functional-618885] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 02:17:49.960177 1426791 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:17:49.960399 1426791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:17:49.960414 1426791 out.go:309] Setting ErrFile to fd 2...
	I0131 02:17:49.960424 1426791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:17:49.960917 1426791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:17:49.961705 1426791 out.go:303] Setting JSON to false
	I0131 02:17:49.963323 1426791 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":25213,"bootTime":1706642257,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 02:17:49.963420 1426791 start.go:138] virtualization: kvm guest
	I0131 02:17:49.966240 1426791 out.go:177] * [functional-618885] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0131 02:17:49.968036 1426791 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 02:17:49.968188 1426791 notify.go:220] Checking for updates...
	I0131 02:17:49.969703 1426791 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 02:17:49.971387 1426791 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 02:17:49.972976 1426791 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 02:17:49.974550 1426791 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 02:17:49.976033 1426791 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 02:17:49.977890 1426791 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:17:49.978399 1426791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:17:49.978454 1426791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:17:49.999279 1426791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
	I0131 02:17:49.999951 1426791 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:17:50.000785 1426791 main.go:141] libmachine: Using API Version  1
	I0131 02:17:50.000824 1426791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:17:50.001398 1426791 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:17:50.002243 1426791 main.go:141] libmachine: (functional-618885) Calling .DriverName
	I0131 02:17:50.002739 1426791 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 02:17:50.003073 1426791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:17:50.003118 1426791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:17:50.018594 1426791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39991
	I0131 02:17:50.019399 1426791 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:17:50.019973 1426791 main.go:141] libmachine: Using API Version  1
	I0131 02:17:50.020013 1426791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:17:50.020747 1426791 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:17:50.021038 1426791 main.go:141] libmachine: (functional-618885) Calling .DriverName
	I0131 02:17:50.066067 1426791 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0131 02:17:50.067502 1426791 start.go:298] selected driver: kvm2
	I0131 02:17:50.067525 1426791 start.go:902] validating driver "kvm2" against &{Name:functional-618885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-618885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.221 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0131 02:17:50.067659 1426791 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 02:17:50.069734 1426791 out.go:177] 
	W0131 02:17:50.071066 1426791 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0131 02:17:50.072488 1426791 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)

TestFunctional/parallel/ServiceCmdConnect (8.83s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-618885 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-618885 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-tlxrv" [fa9a6008-827a-4164-aaf1-81a0a0e6f962] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-tlxrv" [fa9a6008-827a-4164-aaf1-81a0a0e6f962] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003639253s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.221:31575
functional_test.go:1674: http://192.168.39.221:31575: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-tlxrv

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.221:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.221:31575
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.83s)
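The connect test reduces to four steps: create a deployment, expose it as a NodePort service, ask minikube for the node URL, and fetch it over HTTP. A minimal Go sketch of that flow, assuming kubectl and minikube are on PATH and the pod has already reached Running (context and profile name illustrative):

	// Minimal sketch (not part of the test suite): deploy, expose, resolve URL, GET.
	// Assumes "kubectl" and "minikube" are on PATH; names are illustrative.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		p := "functional-618885"
		_ = exec.Command("kubectl", "--context", p, "create", "deployment", "hello-node-connect",
			"--image=registry.k8s.io/echoserver:1.8").Run()
		_ = exec.Command("kubectl", "--context", p, "expose", "deployment", "hello-node-connect",
			"--type=NodePort", "--port=8080").Run()

		// The harness above waits for the pod to be Running before this point.
		out, err := exec.Command("minikube", "-p", p, "service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out))

		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
	}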

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (46.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [322bc4ef-75a9-4667-badf-5883882cfd5c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005901369s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-618885 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-618885 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-618885 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-618885 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0cfdbe89-eaef-48a7-913b-220e392e5355] Pending
helpers_test.go:344: "sp-pod" [0cfdbe89-eaef-48a7-913b-220e392e5355] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0cfdbe89-eaef-48a7-913b-220e392e5355] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004497732s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-618885 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-618885 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-618885 delete -f testdata/storage-provisioner/pod.yaml: (2.976570198s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-618885 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [44b7c31f-e6af-404a-940c-b61a057a8112] Pending
helpers_test.go:344: "sp-pod" [44b7c31f-e6af-404a-940c-b61a057a8112] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [44b7c31f-e6af-404a-940c-b61a057a8112] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004228782s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-618885 exec sp-pod -- ls /tmp/mount
E0131 02:18:39.629812 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.82s)

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh -n functional-618885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 cp functional-618885:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1530010990/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh -n functional-618885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh -n functional-618885 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)

TestFunctional/parallel/MySQL (32.12s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-618885 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-4ttkv" [be6ad4e7-1b11-4382-8cb1-6efe8ee237bb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-4ttkv" [be6ad4e7-1b11-4382-8cb1-6efe8ee237bb] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.005105424s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-618885 exec mysql-859648c796-4ttkv -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-618885 exec mysql-859648c796-4ttkv -- mysql -ppassword -e "show databases;": exit status 1 (147.975161ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0131 02:18:38.351270 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:18:38.357105 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:18:38.367365 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:18:38.387624 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:18:38.427919 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:18:38.508302 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:18:38.668818 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:18:38.989482 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
functional_test.go:1806: (dbg) Run:  kubectl --context functional-618885 exec mysql-859648c796-4ttkv -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-618885 exec mysql-859648c796-4ttkv -- mysql -ppassword -e "show databases;": exit status 1 (172.096128ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-618885 exec mysql-859648c796-4ttkv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.12s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/1419976/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo cat /etc/test/nested/copy/1419976/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
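FileSync works because minikube copies anything placed under the profile's .minikube/files/ directory into the guest at the same relative path when the cluster is started; the "local sync path" logged in CopySyncFile and the in-VM path checked here line up for exactly that reason. A minimal Go sketch of staging such a file, assuming minikube is on PATH; the paths and profile name are illustrative and the cluster must be (re)started after the file is written:

	// Minimal sketch (not part of the test suite): stage a file for sync, then read it back.
	// Assumes MINIKUBE_HOME points at the .minikube directory (as in the report's environment).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		home := os.Getenv("MINIKUBE_HOME")
		src := filepath.Join(home, "files", "etc", "demo", "hello.txt") // hypothetical path
		if err := os.MkdirAll(filepath.Dir(src), 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile(src, []byte("synced from the host\n"), 0o644); err != nil {
			panic(err)
		}
		// After the next "minikube start", the file should be visible inside the guest:
		out, err := exec.Command("minikube", "-p", "functional-618885",
			"ssh", "sudo cat /etc/demo/hello.txt").CombinedOutput()
		fmt.Printf("%s(err: %v)\n", out, err)
	}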

                                                
                                    
TestFunctional/parallel/CertSync (1.47s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/1419976.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo cat /etc/ssl/certs/1419976.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/1419976.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo cat /usr/share/ca-certificates/1419976.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/14199762.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo cat /etc/ssl/certs/14199762.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/14199762.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo cat /usr/share/ca-certificates/14199762.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.47s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-618885 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 ssh "sudo systemctl is-active docker": exit status 1 (487.848924ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 ssh "sudo systemctl is-active containerd": exit status 1 (348.109448ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)
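The exit status 3 above comes from systemctl itself ("is-active" exits non-zero for an inactive unit) and is simply propagated through minikube ssh; with crio selected, docker and containerd are expected to be inactive. A minimal Go sketch of the same probe, assuming minikube is on PATH (profile name illustrative):

	// Minimal sketch (not part of the test suite): check that the other runtimes are inactive.
	// Assumes "minikube" is on PATH; the profile name is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, svc := range []string{"docker", "containerd"} {
			out, err := exec.Command("minikube", "-p", "functional-618885",
				"ssh", "sudo systemctl is-active "+svc).CombinedOutput()
			fmt.Printf("%s: %s(err: %v)\n", svc, out, err)
		}
	}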

                                                
                                    
TestFunctional/parallel/License (0.95s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.95s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-618885 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-618885 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-kbvdm" [6deb205e-b9f4-44f1-b127-934381c0fef6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-kbvdm" [6deb205e-b9f4-44f1-b127-934381c0fef6] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.005209223s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "345.989666ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "80.337366ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/MountCmd/any-port (11.85s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdany-port1247703334/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1706667468785455353" to /tmp/TestFunctionalparallelMountCmdany-port1247703334/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1706667468785455353" to /tmp/TestFunctionalparallelMountCmdany-port1247703334/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1706667468785455353" to /tmp/TestFunctionalparallelMountCmdany-port1247703334/001/test-1706667468785455353
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.399999ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 31 02:17 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 31 02:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 31 02:17 test-1706667468785455353
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh cat /mount-9p/test-1706667468785455353
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-618885 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f001beac-6ef8-42ff-b584-239cf88d7dd7] Pending
helpers_test.go:344: "busybox-mount" [f001beac-6ef8-42ff-b584-239cf88d7dd7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f001beac-6ef8-42ff-b584-239cf88d7dd7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f001beac-6ef8-42ff-b584-239cf88d7dd7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.019716526s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-618885 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdany-port1247703334/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.85s)
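Note: the 9p mount exercised above can be reproduced by hand with the same commands the test runs; a minimal sketch (the host path /tmp/mount-demo is a placeholder, everything else is taken from the log above):
  out/minikube-linux-amd64 mount -p functional-618885 /tmp/mount-demo:/mount-9p &
  out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-618885 ssh "ls -la /mount-9p"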

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "284.557464ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "73.344324ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.94s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.94s)

TestFunctional/parallel/MountCmd/specific-port (1.83s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdspecific-port666639926/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (264.306164ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdspecific-port666639926/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 ssh "sudo umount -f /mount-9p": exit status 1 (284.430682ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-618885 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdspecific-port666639926/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)

TestFunctional/parallel/ServiceCmd/List (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 service list -o json
functional_test.go:1493: Took "513.733982ms" to run "out/minikube-linux-amd64 -p functional-618885 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdVerifyCleanup698121277/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdVerifyCleanup698121277/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdVerifyCleanup698121277/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T" /mount1: exit status 1 (311.548205ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-618885 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdVerifyCleanup698121277/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdVerifyCleanup698121277/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-618885 /tmp/TestFunctionalparallelMountCmdVerifyCleanup698121277/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.221:32335
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.221:32335
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618885 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618885 image ls --format short --alsologtostderr:
I0131 02:18:14.174794 1428860 out.go:296] Setting OutFile to fd 1 ...
I0131 02:18:14.174929 1428860 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:14.174937 1428860 out.go:309] Setting ErrFile to fd 2...
I0131 02:18:14.174942 1428860 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:14.175138 1428860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
I0131 02:18:14.175712 1428860 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:14.175814 1428860 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:14.176213 1428860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:14.176258 1428860 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:14.191671 1428860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
I0131 02:18:14.192184 1428860 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:14.192828 1428860 main.go:141] libmachine: Using API Version  1
I0131 02:18:14.192857 1428860 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:14.193209 1428860 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:14.193394 1428860 main.go:141] libmachine: (functional-618885) Calling .GetState
I0131 02:18:14.195493 1428860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:14.195549 1428860 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:14.210201 1428860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
I0131 02:18:14.210692 1428860 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:14.211206 1428860 main.go:141] libmachine: Using API Version  1
I0131 02:18:14.211241 1428860 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:14.211544 1428860 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:14.211722 1428860 main.go:141] libmachine: (functional-618885) Calling .DriverName
I0131 02:18:14.211944 1428860 ssh_runner.go:195] Run: systemctl --version
I0131 02:18:14.212005 1428860 main.go:141] libmachine: (functional-618885) Calling .GetSSHHostname
I0131 02:18:14.215215 1428860 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:14.215606 1428860 main.go:141] libmachine: (functional-618885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:59:f4", ip: ""} in network mk-functional-618885: {Iface:virbr1 ExpiryTime:2024-01-31 03:15:34 +0000 UTC Type:0 Mac:52:54:00:2c:59:f4 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-618885 Clientid:01:52:54:00:2c:59:f4}
I0131 02:18:14.215645 1428860 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined IP address 192.168.39.221 and MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:14.215796 1428860 main.go:141] libmachine: (functional-618885) Calling .GetSSHPort
I0131 02:18:14.215989 1428860 main.go:141] libmachine: (functional-618885) Calling .GetSSHKeyPath
I0131 02:18:14.216150 1428860 main.go:141] libmachine: (functional-618885) Calling .GetSSHUsername
I0131 02:18:14.216302 1428860 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/functional-618885/id_rsa Username:docker}
I0131 02:18:14.358183 1428860 ssh_runner.go:195] Run: sudo crictl images --output json
I0131 02:18:14.445918 1428860 main.go:141] libmachine: Making call to close driver server
I0131 02:18:14.445936 1428860 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:14.446243 1428860 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:14.446270 1428860 main.go:141] libmachine: Making call to close connection to plugin binary
I0131 02:18:14.446282 1428860 main.go:141] libmachine: Making call to close driver server
I0131 02:18:14.446281 1428860 main.go:141] libmachine: (functional-618885) DBG | Closing plugin on server side
I0131 02:18:14.446291 1428860 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:14.446549 1428860 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:14.446570 1428860 main.go:141] libmachine: Making call to close connection to plugin binary
I0131 02:18:14.446587 1428860 main.go:141] libmachine: (functional-618885) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618885 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618885 image ls --format table --alsologtostderr:
I0131 02:18:15.221661 1428997 out.go:296] Setting OutFile to fd 1 ...
I0131 02:18:15.221971 1428997 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:15.221982 1428997 out.go:309] Setting ErrFile to fd 2...
I0131 02:18:15.221990 1428997 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:15.222239 1428997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
I0131 02:18:15.222949 1428997 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:15.223087 1428997 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:15.223580 1428997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:15.223651 1428997 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:15.239173 1428997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
I0131 02:18:15.239717 1428997 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:15.240374 1428997 main.go:141] libmachine: Using API Version  1
I0131 02:18:15.240421 1428997 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:15.240855 1428997 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:15.241078 1428997 main.go:141] libmachine: (functional-618885) Calling .GetState
I0131 02:18:15.243272 1428997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:15.243324 1428997 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:15.258714 1428997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45873
I0131 02:18:15.259107 1428997 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:15.259691 1428997 main.go:141] libmachine: Using API Version  1
I0131 02:18:15.259720 1428997 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:15.260103 1428997 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:15.260343 1428997 main.go:141] libmachine: (functional-618885) Calling .DriverName
I0131 02:18:15.260600 1428997 ssh_runner.go:195] Run: systemctl --version
I0131 02:18:15.260628 1428997 main.go:141] libmachine: (functional-618885) Calling .GetSSHHostname
I0131 02:18:15.263800 1428997 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:15.264190 1428997 main.go:141] libmachine: (functional-618885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:59:f4", ip: ""} in network mk-functional-618885: {Iface:virbr1 ExpiryTime:2024-01-31 03:15:34 +0000 UTC Type:0 Mac:52:54:00:2c:59:f4 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-618885 Clientid:01:52:54:00:2c:59:f4}
I0131 02:18:15.264222 1428997 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined IP address 192.168.39.221 and MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:15.264527 1428997 main.go:141] libmachine: (functional-618885) Calling .GetSSHPort
I0131 02:18:15.264728 1428997 main.go:141] libmachine: (functional-618885) Calling .GetSSHKeyPath
I0131 02:18:15.264898 1428997 main.go:141] libmachine: (functional-618885) Calling .GetSSHUsername
I0131 02:18:15.265244 1428997 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/functional-618885/id_rsa Username:docker}
I0131 02:18:15.372295 1428997 ssh_runner.go:195] Run: sudo crictl images --output json
I0131 02:18:15.431186 1428997 main.go:141] libmachine: Making call to close driver server
I0131 02:18:15.431213 1428997 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:15.431505 1428997 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:15.431526 1428997 main.go:141] libmachine: Making call to close connection to plugin binary
I0131 02:18:15.431537 1428997 main.go:141] libmachine: Making call to close driver server
I0131 02:18:15.431545 1428997 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:15.431784 1428997 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:15.431800 1428997 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618885 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/
coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff
839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18d
b8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919
323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18e
b69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618885 image ls --format json --alsologtostderr:
I0131 02:18:14.947519 1428953 out.go:296] Setting OutFile to fd 1 ...
I0131 02:18:14.947672 1428953 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:14.947686 1428953 out.go:309] Setting ErrFile to fd 2...
I0131 02:18:14.947693 1428953 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:14.948002 1428953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
I0131 02:18:14.948845 1428953 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:14.949014 1428953 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:14.949600 1428953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:14.949667 1428953 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:14.965375 1428953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
I0131 02:18:14.965912 1428953 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:14.966623 1428953 main.go:141] libmachine: Using API Version  1
I0131 02:18:14.966655 1428953 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:14.967038 1428953 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:14.967284 1428953 main.go:141] libmachine: (functional-618885) Calling .GetState
I0131 02:18:14.969210 1428953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:14.969250 1428953 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:14.984812 1428953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
I0131 02:18:14.985304 1428953 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:14.985799 1428953 main.go:141] libmachine: Using API Version  1
I0131 02:18:14.985822 1428953 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:14.986217 1428953 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:14.986433 1428953 main.go:141] libmachine: (functional-618885) Calling .DriverName
I0131 02:18:14.986626 1428953 ssh_runner.go:195] Run: systemctl --version
I0131 02:18:14.986664 1428953 main.go:141] libmachine: (functional-618885) Calling .GetSSHHostname
I0131 02:18:14.989805 1428953 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:14.990202 1428953 main.go:141] libmachine: (functional-618885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:59:f4", ip: ""} in network mk-functional-618885: {Iface:virbr1 ExpiryTime:2024-01-31 03:15:34 +0000 UTC Type:0 Mac:52:54:00:2c:59:f4 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-618885 Clientid:01:52:54:00:2c:59:f4}
I0131 02:18:14.990229 1428953 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined IP address 192.168.39.221 and MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:14.990543 1428953 main.go:141] libmachine: (functional-618885) Calling .GetSSHPort
I0131 02:18:14.990747 1428953 main.go:141] libmachine: (functional-618885) Calling .GetSSHKeyPath
I0131 02:18:14.990888 1428953 main.go:141] libmachine: (functional-618885) Calling .GetSSHUsername
I0131 02:18:14.991013 1428953 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/functional-618885/id_rsa Username:docker}
I0131 02:18:15.104347 1428953 ssh_runner.go:195] Run: sudo crictl images --output json
I0131 02:18:15.149240 1428953 main.go:141] libmachine: Making call to close driver server
I0131 02:18:15.149260 1428953 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:15.149609 1428953 main.go:141] libmachine: (functional-618885) DBG | Closing plugin on server side
I0131 02:18:15.149672 1428953 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:15.149685 1428953 main.go:141] libmachine: Making call to close connection to plugin binary
I0131 02:18:15.149701 1428953 main.go:141] libmachine: Making call to close driver server
I0131 02:18:15.149713 1428953 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:15.149996 1428953 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:15.150018 1428953 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
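Note: the JSON emitted by "image ls --format json" above can be summarized on the host with a one-liner such as the following; this is a sketch that assumes jq is installed, while the minikube command itself is the same one the test runs:
  out/minikube-linux-amd64 -p functional-618885 image ls --format json | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'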

TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618885 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618885 image ls --format yaml --alsologtostderr:
I0131 02:18:14.514378 1428907 out.go:296] Setting OutFile to fd 1 ...
I0131 02:18:14.514576 1428907 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:14.514588 1428907 out.go:309] Setting ErrFile to fd 2...
I0131 02:18:14.514593 1428907 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:14.514803 1428907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
I0131 02:18:14.515413 1428907 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:14.515523 1428907 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:14.515937 1428907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:14.515994 1428907 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:14.531120 1428907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
I0131 02:18:14.531655 1428907 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:14.532327 1428907 main.go:141] libmachine: Using API Version  1
I0131 02:18:14.532359 1428907 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:14.532797 1428907 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:14.533012 1428907 main.go:141] libmachine: (functional-618885) Calling .GetState
I0131 02:18:14.535096 1428907 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:14.535150 1428907 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:14.550055 1428907 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
I0131 02:18:14.550593 1428907 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:14.551203 1428907 main.go:141] libmachine: Using API Version  1
I0131 02:18:14.551235 1428907 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:14.551608 1428907 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:14.551838 1428907 main.go:141] libmachine: (functional-618885) Calling .DriverName
I0131 02:18:14.552030 1428907 ssh_runner.go:195] Run: systemctl --version
I0131 02:18:14.552058 1428907 main.go:141] libmachine: (functional-618885) Calling .GetSSHHostname
I0131 02:18:14.555101 1428907 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:14.555534 1428907 main.go:141] libmachine: (functional-618885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:59:f4", ip: ""} in network mk-functional-618885: {Iface:virbr1 ExpiryTime:2024-01-31 03:15:34 +0000 UTC Type:0 Mac:52:54:00:2c:59:f4 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-618885 Clientid:01:52:54:00:2c:59:f4}
I0131 02:18:14.555570 1428907 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined IP address 192.168.39.221 and MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:14.555693 1428907 main.go:141] libmachine: (functional-618885) Calling .GetSSHPort
I0131 02:18:14.555902 1428907 main.go:141] libmachine: (functional-618885) Calling .GetSSHKeyPath
I0131 02:18:14.556087 1428907 main.go:141] libmachine: (functional-618885) Calling .GetSSHUsername
I0131 02:18:14.556240 1428907 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/functional-618885/id_rsa Username:docker}
I0131 02:18:14.669994 1428907 ssh_runner.go:195] Run: sudo crictl images --output json
I0131 02:18:14.726086 1428907 main.go:141] libmachine: Making call to close driver server
I0131 02:18:14.726105 1428907 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:14.726413 1428907 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:14.726442 1428907 main.go:141] libmachine: Making call to close connection to plugin binary
I0131 02:18:14.726450 1428907 main.go:141] libmachine: (functional-618885) DBG | Closing plugin on server side
I0131 02:18:14.726453 1428907 main.go:141] libmachine: Making call to close driver server
I0131 02:18:14.726464 1428907 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:14.726726 1428907 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:14.726761 1428907 main.go:141] libmachine: Making call to close connection to plugin binary
I0131 02:18:14.726761 1428907 main.go:141] libmachine: (functional-618885) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-618885 ssh pgrep buildkitd: exit status 1 (290.873461ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image build -t localhost/my-image:functional-618885 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-618885 image build -t localhost/my-image:functional-618885 testdata/build --alsologtostderr: (4.216804199s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-618885 image build -t localhost/my-image:functional-618885 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 154a6befaac
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-618885
--> 5f22e79c7ef
Successfully tagged localhost/my-image:functional-618885
5f22e79c7ef70487e6f6ec6e44745c9b88baa2d0046eaf0cfe50d373a6db9c07
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-618885 image build -t localhost/my-image:functional-618885 testdata/build --alsologtostderr:
I0131 02:18:15.227430 1428998 out.go:296] Setting OutFile to fd 1 ...
I0131 02:18:15.227566 1428998 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:15.227577 1428998 out.go:309] Setting ErrFile to fd 2...
I0131 02:18:15.227581 1428998 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0131 02:18:15.227783 1428998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
I0131 02:18:15.228436 1428998 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:15.229151 1428998 config.go:182] Loaded profile config "functional-618885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0131 02:18:15.229789 1428998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:15.229848 1428998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:15.244875 1428998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
I0131 02:18:15.245310 1428998 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:15.245918 1428998 main.go:141] libmachine: Using API Version  1
I0131 02:18:15.245956 1428998 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:15.246398 1428998 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:15.246674 1428998 main.go:141] libmachine: (functional-618885) Calling .GetState
I0131 02:18:15.248604 1428998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0131 02:18:15.248653 1428998 main.go:141] libmachine: Launching plugin server for driver kvm2
I0131 02:18:15.264681 1428998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
I0131 02:18:15.265112 1428998 main.go:141] libmachine: () Calling .GetVersion
I0131 02:18:15.265691 1428998 main.go:141] libmachine: Using API Version  1
I0131 02:18:15.265716 1428998 main.go:141] libmachine: () Calling .SetConfigRaw
I0131 02:18:15.266176 1428998 main.go:141] libmachine: () Calling .GetMachineName
I0131 02:18:15.266537 1428998 main.go:141] libmachine: (functional-618885) Calling .DriverName
I0131 02:18:15.266793 1428998 ssh_runner.go:195] Run: systemctl --version
I0131 02:18:15.266827 1428998 main.go:141] libmachine: (functional-618885) Calling .GetSSHHostname
I0131 02:18:15.269437 1428998 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:15.269836 1428998 main.go:141] libmachine: (functional-618885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:59:f4", ip: ""} in network mk-functional-618885: {Iface:virbr1 ExpiryTime:2024-01-31 03:15:34 +0000 UTC Type:0 Mac:52:54:00:2c:59:f4 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:functional-618885 Clientid:01:52:54:00:2c:59:f4}
I0131 02:18:15.269873 1428998 main.go:141] libmachine: (functional-618885) DBG | domain functional-618885 has defined IP address 192.168.39.221 and MAC address 52:54:00:2c:59:f4 in network mk-functional-618885
I0131 02:18:15.270101 1428998 main.go:141] libmachine: (functional-618885) Calling .GetSSHPort
I0131 02:18:15.270285 1428998 main.go:141] libmachine: (functional-618885) Calling .GetSSHKeyPath
I0131 02:18:15.270456 1428998 main.go:141] libmachine: (functional-618885) Calling .GetSSHUsername
I0131 02:18:15.270608 1428998 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/functional-618885/id_rsa Username:docker}
I0131 02:18:15.409338 1428998 build_images.go:151] Building image from path: /tmp/build.2823389119.tar
I0131 02:18:15.409420 1428998 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0131 02:18:15.435061 1428998 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2823389119.tar
I0131 02:18:15.440498 1428998 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2823389119.tar: stat -c "%s %y" /var/lib/minikube/build/build.2823389119.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2823389119.tar': No such file or directory
I0131 02:18:15.440534 1428998 ssh_runner.go:362] scp /tmp/build.2823389119.tar --> /var/lib/minikube/build/build.2823389119.tar (3072 bytes)
I0131 02:18:15.480209 1428998 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2823389119
I0131 02:18:15.494040 1428998 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2823389119 -xf /var/lib/minikube/build/build.2823389119.tar
I0131 02:18:15.505412 1428998 crio.go:297] Building image: /var/lib/minikube/build/build.2823389119
I0131 02:18:15.505481 1428998 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-618885 /var/lib/minikube/build/build.2823389119 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0131 02:18:19.344499 1428998 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-618885 /var/lib/minikube/build/build.2823389119 --cgroup-manager=cgroupfs: (3.838988222s)
I0131 02:18:19.344571 1428998 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2823389119
I0131 02:18:19.358800 1428998 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2823389119.tar
I0131 02:18:19.369119 1428998 build_images.go:207] Built localhost/my-image:functional-618885 from /tmp/build.2823389119.tar
I0131 02:18:19.369163 1428998 build_images.go:123] succeeded building to: functional-618885
I0131 02:18:19.369167 1428998 build_images.go:124] failed building to: 
I0131 02:18:19.369195 1428998 main.go:141] libmachine: Making call to close driver server
I0131 02:18:19.369209 1428998 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:19.369554 1428998 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:19.369576 1428998 main.go:141] libmachine: Making call to close connection to plugin binary
I0131 02:18:19.369587 1428998 main.go:141] libmachine: Making call to close driver server
I0131 02:18:19.369595 1428998 main.go:141] libmachine: (functional-618885) Calling .Close
I0131 02:18:19.369836 1428998 main.go:141] libmachine: (functional-618885) DBG | Closing plugin on server side
I0131 02:18:19.369839 1428998 main.go:141] libmachine: Successfully made call to close driver server
I0131 02:18:19.369866 1428998 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.74s)
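Note: the build above boils down to copying the context tar onto the node, unpacking it, and invoking podman against the unpacked directory. Below is a minimal Go sketch of that final podman step using os/exec; the context path and tag are illustrative stand-ins (not the temporary build.NNNN paths minikube generates), and running podman under sudo is assumed to match the log.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical context directory containing a Dockerfile, standing in for the
        // temporary /var/lib/minikube/build/build.NNNN directory that the tar is unpacked into.
        ctxDir := "/var/lib/minikube/build/example"
        // Tag mirrors the localhost/<name>:<profile> pattern seen in the log above.
        tag := "localhost/my-image:functional-618885"

        cmd := exec.Command("sudo", "podman", "build", "-t", tag, ctxDir, "--cgroup-manager=cgroupfs")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("build failed:", err)
        }
    }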

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.963209761s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-618885
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image rm gcr.io/google-containers/addon-resizer:functional-618885 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-618885 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-618885
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-618885
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-618885
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (92.2s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-757160 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0131 02:18:43.470636 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:18:48.591224 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:18:58.831822 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:19:19.312739 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:20:00.272999 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-757160 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m32.195061481s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (92.20s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757160 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-757160 addons enable ingress --alsologtostderr -v=5: (16.421904687s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.42s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-757160 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                    
x
+
TestJSONOutput/start/Command (60.7s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-675574 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0131 02:23:29.472284 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:23:38.353011 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:24:06.034231 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:24:10.433185 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-675574 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.699978116s)
--- PASS: TestJSONOutput/start/Command (60.70s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-675574 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-675574 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-675574 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-675574 --output=json --user=testUser: (7.112631822s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-372000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-372000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.812985ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ffde4d06-09b4-4ff8-9221-43400b401c75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-372000] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"025f8e79-43ec-4fe5-866b-c064d3709f1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18051"}}
	{"specversion":"1.0","id":"0b6622e9-5aa0-467f-b558-b3bd124715f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bad84391-5a98-47ae-a560-7f684f323199","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig"}}
	{"specversion":"1.0","id":"ef28ed41-db65-4b91-b33d-324216d97432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube"}}
	{"specversion":"1.0","id":"fd6375cc-d0f1-4d20-b74f-c873c79280af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9a835fe7-5810-4fd7-a4c2-3e34ef618db8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1c042be4-59f8-4d0f-a383-eb79793bed6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-372000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-372000
--- PASS: TestErrorJSONOutput (0.23s)
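Note: each line in the -- stdout -- block above is a CloudEvents-style JSON object. The Go sketch below scans such a stream and surfaces error events; the field names used (type, data.name, data.exitcode, data.message) are taken directly from the events shown, everything else is illustrative.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // Read a minikube --output=json stream from stdin, one event per line.
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip anything that is not a JSON event
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s (exit code %s): %s\n",
                    ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }

Piping the output of a run like the one above through this program would print the DRV_UNSUPPORTED_OS event with exit code 56.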

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (93.75s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-207723 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-207723 --driver=kvm2  --container-runtime=crio: (45.220159437s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-210890 --driver=kvm2  --container-runtime=crio
E0131 02:25:30.924453 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:30.929843 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:30.940184 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:30.960511 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:31.000840 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:31.081172 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:31.241637 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:31.562302 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:32.203274 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:32.353566 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:25:33.483542 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:36.044378 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:41.165125 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:25:51.406053 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-210890 --driver=kvm2  --container-runtime=crio: (46.000216648s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-207723
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-210890
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-210890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-210890
helpers_test.go:175: Cleaning up "first-207723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-207723
--- PASS: TestMinikubeProfile (93.75s)
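Note: the profile checks above rely on minikube profile list -ojson. The exact JSON shape is not reproduced in this log, so the Go sketch below makes no assumptions about it and simply re-indents whatever the command returns for manual inspection; the binary path matches the test workspace layout and would change if minikube is on PATH.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same binary path the tests use; adjust if minikube is installed globally.
        out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var buf bytes.Buffer
        if err := json.Indent(&buf, out, "", "  "); err != nil {
            fmt.Println("output is not valid JSON:", err)
            return
        }
        fmt.Println(buf.String())
    }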

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (29.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-769598 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0131 02:26:11.886960 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-769598 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.357066975s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.36s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-769598 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-769598 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (29.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-788333 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0131 02:26:52.847748 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-788333 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.139179531s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788333 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788333 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-769598 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788333 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788333 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-788333
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-788333: (1.214700429s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.74s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-788333
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-788333: (22.738860682s)
--- PASS: TestMountStart/serial/RestartStopped (23.74s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788333 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-788333 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)
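Note: the mount checks above verify the 9p host mount by running "mount | grep 9p" over ssh. An equivalent check in Go, intended to run inside the guest, scans /proc/mounts for any filesystem of type 9p.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            fmt.Println("cannot read /proc/mounts:", err)
            return
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // /proc/mounts columns: device, mountpoint, fstype, options, dump, pass
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[2] == "9p" {
                fmt.Printf("9p mount: %s on %s\n", fields[0], fields[1])
            }
        }
    }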

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (152.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0131 02:27:48.510108 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:28:14.771399 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:28:16.193833 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:28:38.351426 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-263108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m32.16131653s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (152.62s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-263108 -- rollout status deployment/busybox: (4.054371105s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-9xlwh -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-dlpzg -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-9xlwh -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-dlpzg -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-9xlwh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-dlpzg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.85s)
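Note: the pod names used in the exec steps above come from a jsonpath query. A short Go sketch that runs the same query and splits the result; it assumes kubectl is on PATH and that the multinode-263108 context exists, as it does in this run.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "multinode-263108", "get", "pods",
            "-o", "jsonpath={.items[*].metadata.name}").Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        for _, name := range strings.Fields(string(out)) {
            fmt.Println(name)
        }
    }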

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-9xlwh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-9xlwh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-dlpzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263108 -- exec busybox-5b5d89c9d6-dlpzg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
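Note: the nslookup pipeline above extracts the address that host.minikube.internal resolves to from inside a pod. The same lookup expressed in Go; it is only meaningful when run inside the cluster, where that name is resolvable.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        addrs, err := net.LookupHost("host.minikube.internal")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        for _, a := range addrs {
            fmt.Println("host.minikube.internal ->", a)
        }
    }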

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-263108 -v 3 --alsologtostderr
E0131 02:30:30.923811 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-263108 -v 3 --alsologtostderr: (41.912786388s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.53s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-263108 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp testdata/cp-test.txt multinode-263108:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp multinode-263108:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2294290134/001/cp-test_multinode-263108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp multinode-263108:/home/docker/cp-test.txt multinode-263108-m02:/home/docker/cp-test_multinode-263108_multinode-263108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m02 "sudo cat /home/docker/cp-test_multinode-263108_multinode-263108-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp multinode-263108:/home/docker/cp-test.txt multinode-263108-m03:/home/docker/cp-test_multinode-263108_multinode-263108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108 "sudo cat /home/docker/cp-test.txt"
E0131 02:30:58.612535 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m03 "sudo cat /home/docker/cp-test_multinode-263108_multinode-263108-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp testdata/cp-test.txt multinode-263108-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp multinode-263108-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2294290134/001/cp-test_multinode-263108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp multinode-263108-m02:/home/docker/cp-test.txt multinode-263108:/home/docker/cp-test_multinode-263108-m02_multinode-263108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108 "sudo cat /home/docker/cp-test_multinode-263108-m02_multinode-263108.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp multinode-263108-m02:/home/docker/cp-test.txt multinode-263108-m03:/home/docker/cp-test_multinode-263108-m02_multinode-263108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m03 "sudo cat /home/docker/cp-test_multinode-263108-m02_multinode-263108-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp testdata/cp-test.txt multinode-263108-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp multinode-263108-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2294290134/001/cp-test_multinode-263108-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp multinode-263108-m03:/home/docker/cp-test.txt multinode-263108:/home/docker/cp-test_multinode-263108-m03_multinode-263108.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108 "sudo cat /home/docker/cp-test_multinode-263108-m03_multinode-263108.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 cp multinode-263108-m03:/home/docker/cp-test.txt multinode-263108-m02:/home/docker/cp-test_multinode-263108-m03_multinode-263108-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 ssh -n multinode-263108-m02 "sudo cat /home/docker/cp-test_multinode-263108-m03_multinode-263108-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.90s)
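Note: each copy above is verified by reading the file back over ssh and comparing it with the source. Below is a compact Go version of that copy-then-verify loop for a single node; the profile, node name, and paths are the ones shown in the log, the binary path assumes the test workspace layout, and the comparison trims trailing whitespace in case the ssh transport appends a newline.

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        local := "testdata/cp-test.txt"
        remote := "/home/docker/cp-test.txt"

        // minikube -p <profile> cp <local> <node>:<remote>
        if err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-263108",
            "cp", local, "multinode-263108:"+remote).Run(); err != nil {
            fmt.Println("cp failed:", err)
            return
        }
        want, err := os.ReadFile(local)
        if err != nil {
            fmt.Println("reading local file failed:", err)
            return
        }
        // minikube -p <profile> ssh -n <node> "sudo cat <remote>"
        got, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-263108",
            "ssh", "-n", "multinode-263108", "sudo cat "+remote).Output()
        if err != nil {
            fmt.Println("ssh cat failed:", err)
            return
        }
        fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)))
    }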

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-263108 node stop m03: (1.322314929s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-263108 status: exit status 7 (458.109775ms)

                                                
                                                
-- stdout --
	multinode-263108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-263108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-263108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-263108 status --alsologtostderr: exit status 7 (461.12322ms)

                                                
                                                
-- stdout --
	multinode-263108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-263108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-263108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 02:31:05.683618 1435973 out.go:296] Setting OutFile to fd 1 ...
	I0131 02:31:05.683771 1435973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:31:05.683782 1435973 out.go:309] Setting ErrFile to fd 2...
	I0131 02:31:05.683787 1435973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 02:31:05.684004 1435973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 02:31:05.684222 1435973 out.go:303] Setting JSON to false
	I0131 02:31:05.684255 1435973 mustload.go:65] Loading cluster: multinode-263108
	I0131 02:31:05.684402 1435973 notify.go:220] Checking for updates...
	I0131 02:31:05.684788 1435973 config.go:182] Loaded profile config "multinode-263108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 02:31:05.684806 1435973 status.go:255] checking status of multinode-263108 ...
	I0131 02:31:05.685292 1435973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:31:05.685383 1435973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:31:05.713288 1435973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35095
	I0131 02:31:05.713773 1435973 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:31:05.714456 1435973 main.go:141] libmachine: Using API Version  1
	I0131 02:31:05.714506 1435973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:31:05.714842 1435973 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:31:05.715065 1435973 main.go:141] libmachine: (multinode-263108) Calling .GetState
	I0131 02:31:05.716614 1435973 status.go:330] multinode-263108 host status = "Running" (err=<nil>)
	I0131 02:31:05.716634 1435973 host.go:66] Checking if "multinode-263108" exists ...
	I0131 02:31:05.716922 1435973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:31:05.716960 1435973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:31:05.732808 1435973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0131 02:31:05.733186 1435973 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:31:05.733581 1435973 main.go:141] libmachine: Using API Version  1
	I0131 02:31:05.733601 1435973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:31:05.733962 1435973 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:31:05.734182 1435973 main.go:141] libmachine: (multinode-263108) Calling .GetIP
	I0131 02:31:05.737022 1435973 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:31:05.737451 1435973 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:31:05.737481 1435973 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:31:05.737621 1435973 host.go:66] Checking if "multinode-263108" exists ...
	I0131 02:31:05.737904 1435973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:31:05.737930 1435973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:31:05.753021 1435973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0131 02:31:05.753442 1435973 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:31:05.753851 1435973 main.go:141] libmachine: Using API Version  1
	I0131 02:31:05.753873 1435973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:31:05.754176 1435973 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:31:05.754368 1435973 main.go:141] libmachine: (multinode-263108) Calling .DriverName
	I0131 02:31:05.754579 1435973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0131 02:31:05.754608 1435973 main.go:141] libmachine: (multinode-263108) Calling .GetSSHHostname
	I0131 02:31:05.757079 1435973 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:31:05.757539 1435973 main.go:141] libmachine: (multinode-263108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:a7:c9", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:27:48 +0000 UTC Type:0 Mac:52:54:00:35:a7:c9 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:multinode-263108 Clientid:01:52:54:00:35:a7:c9}
	I0131 02:31:05.757565 1435973 main.go:141] libmachine: (multinode-263108) DBG | domain multinode-263108 has defined IP address 192.168.39.109 and MAC address 52:54:00:35:a7:c9 in network mk-multinode-263108
	I0131 02:31:05.757623 1435973 main.go:141] libmachine: (multinode-263108) Calling .GetSSHPort
	I0131 02:31:05.757806 1435973 main.go:141] libmachine: (multinode-263108) Calling .GetSSHKeyPath
	I0131 02:31:05.757981 1435973 main.go:141] libmachine: (multinode-263108) Calling .GetSSHUsername
	I0131 02:31:05.758127 1435973 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108/id_rsa Username:docker}
	I0131 02:31:05.845708 1435973 ssh_runner.go:195] Run: systemctl --version
	I0131 02:31:05.850942 1435973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:31:05.863079 1435973 kubeconfig.go:92] found "multinode-263108" server: "https://192.168.39.109:8443"
	I0131 02:31:05.863112 1435973 api_server.go:166] Checking apiserver status ...
	I0131 02:31:05.863144 1435973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0131 02:31:05.875365 1435973 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1066/cgroup
	I0131 02:31:05.885614 1435973 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podd670ff05d0032fcc9ae24f8fc09df250/crio-07cc2d40ffc76c5dd4c8492dcaa99b3b449088540dd6c5bc2f1e405edaa59003"
	I0131 02:31:05.885705 1435973 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd670ff05d0032fcc9ae24f8fc09df250/crio-07cc2d40ffc76c5dd4c8492dcaa99b3b449088540dd6c5bc2f1e405edaa59003/freezer.state
	I0131 02:31:05.894355 1435973 api_server.go:204] freezer state: "THAWED"
	I0131 02:31:05.894384 1435973 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8443/healthz ...
	I0131 02:31:05.899057 1435973 api_server.go:279] https://192.168.39.109:8443/healthz returned 200:
	ok
	I0131 02:31:05.899081 1435973 status.go:421] multinode-263108 apiserver status = Running (err=<nil>)
	I0131 02:31:05.899091 1435973 status.go:257] multinode-263108 status: &{Name:multinode-263108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0131 02:31:05.899108 1435973 status.go:255] checking status of multinode-263108-m02 ...
	I0131 02:31:05.899396 1435973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:31:05.899431 1435973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:31:05.916997 1435973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0131 02:31:05.917446 1435973 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:31:05.917979 1435973 main.go:141] libmachine: Using API Version  1
	I0131 02:31:05.918010 1435973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:31:05.918347 1435973 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:31:05.918591 1435973 main.go:141] libmachine: (multinode-263108-m02) Calling .GetState
	I0131 02:31:05.920158 1435973 status.go:330] multinode-263108-m02 host status = "Running" (err=<nil>)
	I0131 02:31:05.920195 1435973 host.go:66] Checking if "multinode-263108-m02" exists ...
	I0131 02:31:05.920472 1435973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:31:05.920511 1435973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:31:05.935332 1435973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0131 02:31:05.935761 1435973 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:31:05.936183 1435973 main.go:141] libmachine: Using API Version  1
	I0131 02:31:05.936205 1435973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:31:05.936553 1435973 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:31:05.936753 1435973 main.go:141] libmachine: (multinode-263108-m02) Calling .GetIP
	I0131 02:31:05.939460 1435973 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:31:05.939896 1435973 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:31:05.939928 1435973 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:31:05.940047 1435973 host.go:66] Checking if "multinode-263108-m02" exists ...
	I0131 02:31:05.940392 1435973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:31:05.940436 1435973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:31:05.955594 1435973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I0131 02:31:05.956013 1435973 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:31:05.956423 1435973 main.go:141] libmachine: Using API Version  1
	I0131 02:31:05.956443 1435973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:31:05.956758 1435973 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:31:05.956914 1435973 main.go:141] libmachine: (multinode-263108-m02) Calling .DriverName
	I0131 02:31:05.957096 1435973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0131 02:31:05.957118 1435973 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHHostname
	I0131 02:31:05.960009 1435973 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:31:05.960477 1435973 main.go:141] libmachine: (multinode-263108-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:10:c7", ip: ""} in network mk-multinode-263108: {Iface:virbr1 ExpiryTime:2024-01-31 03:28:53 +0000 UTC Type:0 Mac:52:54:00:8c:10:c7 Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-263108-m02 Clientid:01:52:54:00:8c:10:c7}
	I0131 02:31:05.960513 1435973 main.go:141] libmachine: (multinode-263108-m02) DBG | domain multinode-263108-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8c:10:c7 in network mk-multinode-263108
	I0131 02:31:05.960645 1435973 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHPort
	I0131 02:31:05.960809 1435973 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHKeyPath
	I0131 02:31:05.960973 1435973 main.go:141] libmachine: (multinode-263108-m02) Calling .GetSSHUsername
	I0131 02:31:05.961074 1435973 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18051-1412717/.minikube/machines/multinode-263108-m02/id_rsa Username:docker}
	I0131 02:31:06.049532 1435973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0131 02:31:06.063668 1435973 status.go:257] multinode-263108-m02 status: &{Name:multinode-263108-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0131 02:31:06.063713 1435973 status.go:255] checking status of multinode-263108-m03 ...
	I0131 02:31:06.064055 1435973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0131 02:31:06.064113 1435973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0131 02:31:06.081022 1435973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I0131 02:31:06.081473 1435973 main.go:141] libmachine: () Calling .GetVersion
	I0131 02:31:06.081955 1435973 main.go:141] libmachine: Using API Version  1
	I0131 02:31:06.081980 1435973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0131 02:31:06.082301 1435973 main.go:141] libmachine: () Calling .GetMachineName
	I0131 02:31:06.082494 1435973 main.go:141] libmachine: (multinode-263108-m03) Calling .GetState
	I0131 02:31:06.084099 1435973 status.go:330] multinode-263108-m03 host status = "Stopped" (err=<nil>)
	I0131 02:31:06.084113 1435973 status.go:343] host is not running, skipping remaining checks
	I0131 02:31:06.084119 1435973 status.go:257] multinode-263108-m03 status: &{Name:multinode-263108-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
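For reference, the apiserver probe recorded above (freezer state, then /healthz) ends in a plain HTTPS GET against the control-plane endpoint. Below is a minimal standalone sketch of that last step; the endpoint 192.168.39.109:8443 is taken from the log, and certificate verification is skipped here for brevity, whereas minikube itself authenticates with the cluster's client certificates, so an anonymous probe against a locked-down apiserver may be rejected.

	// healthz_probe.go: minimal sketch of the /healthz check seen in the status log above.
	// Assumes the apiserver endpoint from the log; skips TLS verification for brevity.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.39.109:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	}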

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-263108 node start m03 --alsologtostderr: (28.777636517s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.47s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.79s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-263108 node delete m03: (1.229637304s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.79s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (444.76s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263108 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0131 02:45:30.923435 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:47:48.510882 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 02:48:38.351613 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:50:30.923460 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 02:51:41.395123 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 02:52:48.510419 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-263108 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m24.195135474s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263108 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (444.76s)
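Both restart tests above finish by asking kubectl for every node's Ready condition through a go-template. The template below is copied from the test invocation, minus the outer shell quoting; the Go wrapper around it is only an illustrative sketch and assumes kubectl on PATH with a kubeconfig pointing at the cluster under test.

	// nodes_ready.go: re-runs the Ready-condition query used by the multinode tests.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Template taken from the test invocation above (shell quotes removed).
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
		fmt.Print(string(out)) // one "True" line per Ready node
	}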

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.59s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-263108
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263108-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-263108-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (83.453673ms)

                                                
                                                
-- stdout --
	* [multinode-263108-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-263108-m02' is duplicated with machine name 'multinode-263108-m02' in profile 'multinode-263108'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263108-m03 --driver=kvm2  --container-runtime=crio
E0131 02:53:38.351104 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-263108-m03 --driver=kvm2  --container-runtime=crio: (47.378858089s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-263108
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-263108: exit status 80 (252.555143ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-263108
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-263108-m03 already exists in multinode-263108-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-263108-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.59s)

                                                
                                    
TestScheduledStopUnix (118.57s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-673078 --memory=2048 --driver=kvm2  --container-runtime=crio
E0131 02:57:48.510816 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-673078 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.659090967s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-673078 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-673078 -n scheduled-stop-673078
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-673078 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-673078 --cancel-scheduled
E0131 02:58:33.976450 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-673078 -n scheduled-stop-673078
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-673078
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-673078 --schedule 15s
E0131 02:58:38.352657 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-673078
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-673078: exit status 7 (87.392408ms)

                                                
                                                
-- stdout --
	scheduled-stop-673078
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-673078 -n scheduled-stop-673078
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-673078 -n scheduled-stop-673078: exit status 7 (87.465417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-673078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-673078
--- PASS: TestScheduledStopUnix (118.57s)
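The scheduled-stop flow above is driven entirely through the CLI: schedule a stop, optionally cancel it, then poll status until the host reports Stopped. A rough sketch of that loop follows, using only commands that appear in the log; the profile name is the one from the test and the timings are placeholders. Note that `status` exits non-zero (7 in the log) once the host is stopped, so the sketch inspects the output rather than the error.

	// scheduled_stop.go: sketch of the schedule-then-poll sequence from TestScheduledStopUnix.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func run(args ...string) string {
		out, _ := exec.Command("minikube", args...).CombinedOutput() // status exits 7 once stopped
		return strings.TrimSpace(string(out))
	}

	func main() {
		profile := "scheduled-stop-673078" // profile name from the log; any existing profile works

		// Schedule a stop 15 seconds in the future, as the test does.
		run("stop", "-p", profile, "--schedule", "15s")

		// Poll the host state until the scheduled stop has taken effect.
		for i := 0; i < 12; i++ {
			host := run("status", "--format", "{{.Host}}", "-p", profile)
			fmt.Println("host:", host)
			if host == "Stopped" {
				return
			}
			time.Sleep(5 * time.Second)
		}
	}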

                                                
                                    
TestRunningBinaryUpgrade (223.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3788903225 start -p running-upgrade-331640 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3788903225 start -p running-upgrade-331640 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.868886435s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-331640 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-331640 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.671411259s)
helpers_test.go:175: Cleaning up "running-upgrade-331640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-331640
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-331640: (1.474351512s)
--- PASS: TestRunningBinaryUpgrade (223.63s)

                                                
                                    
TestKubernetesUpgrade (248.68s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-278852 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0131 03:00:30.923623 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-278852 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m43.018965225s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-278852
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-278852: (2.498630343s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-278852 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-278852 status --format={{.Host}}: exit status 7 (111.835551ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-278852 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-278852 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.397572019s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-278852 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-278852 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-278852 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (130.19017ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-278852] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-278852
	    minikube start -p kubernetes-upgrade-278852 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2788522 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-278852 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-278852 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-278852 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.361048988s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-278852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-278852
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-278852: (1.093107385s)
--- PASS: TestKubernetesUpgrade (248.68s)
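The downgrade refusal above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) comes down to comparing the requested --kubernetes-version with the version the cluster is already running. The guard below is only an illustration of that comparison using golang.org/x/mod/semver; it is not minikube's actual implementation.

	// downgrade_guard.go: illustrative version guard, not minikube's own code.
	// Requires the golang.org/x/mod module.
	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func main() {
		existing := "v1.29.0-rc.2" // version already deployed (from the log)
		requested := "v1.16.0"     // version passed via --kubernetes-version

		if semver.Compare(requested, existing) < 0 {
			fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n", existing, requested)
			return
		}
		fmt.Println("upgrade or restart at", requested, "is allowed")
	}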

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-317821 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-317821 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (114.922867ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-317821] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (100.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-317821 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-317821 --driver=kvm2  --container-runtime=crio: (1m40.193766263s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-317821 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.48s)

                                                
                                    
TestNetworkPlugins/group/false (3.65s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-390748 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-390748 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (115.93314ms)

                                                
                                                
-- stdout --
	* [false-390748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18051
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0131 03:00:11.710687 1444595 out.go:296] Setting OutFile to fd 1 ...
	I0131 03:00:11.711003 1444595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:00:11.711014 1444595 out.go:309] Setting ErrFile to fd 2...
	I0131 03:00:11.711019 1444595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0131 03:00:11.711224 1444595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18051-1412717/.minikube/bin
	I0131 03:00:11.711838 1444595 out.go:303] Setting JSON to false
	I0131 03:00:11.712963 1444595 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":27755,"bootTime":1706642257,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0131 03:00:11.713037 1444595 start.go:138] virtualization: kvm guest
	I0131 03:00:11.715423 1444595 out.go:177] * [false-390748] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0131 03:00:11.716937 1444595 out.go:177]   - MINIKUBE_LOCATION=18051
	I0131 03:00:11.718409 1444595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0131 03:00:11.716947 1444595 notify.go:220] Checking for updates...
	I0131 03:00:11.720870 1444595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18051-1412717/kubeconfig
	I0131 03:00:11.722157 1444595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18051-1412717/.minikube
	I0131 03:00:11.723461 1444595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0131 03:00:11.724739 1444595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0131 03:00:11.726472 1444595 config.go:182] Loaded profile config "NoKubernetes-317821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:00:11.726621 1444595 config.go:182] Loaded profile config "offline-crio-331647": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0131 03:00:11.726736 1444595 config.go:182] Loaded profile config "running-upgrade-331640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0131 03:00:11.726850 1444595 driver.go:392] Setting default libvirt URI to qemu:///system
	I0131 03:00:11.762143 1444595 out.go:177] * Using the kvm2 driver based on user configuration
	I0131 03:00:11.763547 1444595 start.go:298] selected driver: kvm2
	I0131 03:00:11.763559 1444595 start.go:902] validating driver "kvm2" against <nil>
	I0131 03:00:11.763574 1444595 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0131 03:00:11.765505 1444595 out.go:177] 
	W0131 03:00:11.766819 1444595 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0131 03:00:11.768148 1444595 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-390748 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-390748" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-390748

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-390748"

                                                
                                                
----------------------- debugLogs end: false-390748 [took: 3.370106657s] --------------------------------
helpers_test.go:175: Cleaning up "false-390748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-390748
--- PASS: TestNetworkPlugins/group/false (3.65s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.19s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (160.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.975008983 start -p stopped-upgrade-609081 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.975008983 start -p stopped-upgrade-609081 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m27.228143837s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.975008983 -p stopped-upgrade-609081 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.975008983 -p stopped-upgrade-609081 stop: (2.128795078s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-609081 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-609081 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.622794627s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (160.98s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (64.79s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-317821 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-317821 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m3.283224436s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-317821 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-317821 status -o json: exit status 2 (305.61273ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-317821","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-317821
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-317821: (1.200760665s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (64.79s)
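The JSON in the stdout block above is the machine-readable status the tests parse. It maps onto a small struct; the snippet below unmarshals exactly the line captured in the log.

	// status_json.go: parses the `minikube status -o json` line shown above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Fields mirror the JSON captured in the test output above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-317821","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st Status
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			fmt.Println("unmarshal failed:", err)
			return
		}
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s worker=%v\n", st.Name, st.Host, st.Kubelet, st.APIServer, st.Worker)
	}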

                                                
                                    
TestNoKubernetes/serial/Start (28.86s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-317821 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-317821 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.864438351s)
--- PASS: TestNoKubernetes/serial/Start (28.86s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-317821 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-317821 "sudo systemctl is-active --quiet service kubelet": exit status 1 (237.846833ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
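The check above keys off systemctl's exit status rather than its output: is-active --quiet exits 0 when the unit is active and non-zero otherwise (the log shows ssh relaying status 3, i.e. inactive). Below is a local sketch of the same probe, minus the SSH and sudo wrapping the test uses inside the VM.

	// kubelet_check.go: local sketch of the "is kubelet running?" probe from the NoKubernetes tests.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err == nil {
			fmt.Println("kubelet is active")
			return
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero status means the unit is not active; the test sees status 3 via ssh.
			fmt.Println("kubelet is not running, exit status", exitErr.ExitCode())
			return
		}
		fmt.Println("could not run systemctl:", err)
	}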

                                                
                                    
TestNoKubernetes/serial/ProfileList (22.5s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0131 03:02:48.510556 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (18.819117495s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.680337927s)
--- PASS: TestNoKubernetes/serial/ProfileList (22.50s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-317821
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-317821: (1.260029345s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (40.14s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-317821 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-317821 --driver=kvm2  --container-runtime=crio: (40.13957772s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.14s)

                                                
                                    
TestPause/serial/Start (99.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-218490 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-218490 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m39.958141831s)
--- PASS: TestPause/serial/Start (99.96s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-609081
E0131 03:03:38.351569 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-317821 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-317821 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.969512ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (75.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0131 03:05:30.923499 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m15.228296267s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (81.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m21.320738505s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.32s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (125.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m5.91699929s)
--- PASS: TestNetworkPlugins/group/calico/Start (125.92s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-390748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-390748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h89bb" [de0095cc-d8d7-45c3-b7a2-35800842e5d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h89bb" [de0095cc-d8d7-45c3-b7a2-35800842e5d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004360194s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-390748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
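Note on the DNS, Localhost, and HairPin checks above: all three run through the same netcat deployment in the auto-390748 profile and probe, in turn, name resolution of kubernetes.default, a loopback connection on port 8080, and a hairpin connection from the pod back to itself through its own service name. The following is a minimal sketch of how those probes could be reproduced from Go by shelling out to kubectl; it is illustrative only, not the suite's helper code, and the context and deployment names are simply those shown in the log above.

// Sketch only: replays the DNS / Localhost / HairPin probes from the log
// by shelling out to kubectl. Names are taken from the log, not from the
// test suite's own helpers.
package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment of the given context.
func probe(kubeContext string, args ...string) error {
	base := []string{"--context", kubeContext, "exec", "deployment/netcat", "--"}
	cmd := exec.Command("kubectl", append(base, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s\n", cmd.Args[1:], out)
	return err
}

func main() {
	ctx := "auto-390748" // any context with the netcat deployment works
	// DNS: resolve the in-cluster API service name.
	_ = probe(ctx, "nslookup", "kubernetes.default")
	// Localhost: the pod reaches its own port bound on localhost.
	_ = probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: the pod reaches itself back through the netcat service.
	_ = probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}

The same pattern is repeated verbatim for the kindnet, calico, custom-flannel, enable-default-cni, flannel, and bridge profiles below.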

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (87.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m27.265750474s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (87.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6t77j" [75cd118b-1814-4cce-b4bc-b6afdb2887ea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005800614s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-390748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-390748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mrrh8" [ef8fa5ca-0e24-45a8-a87c-e560b0b3ead3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mrrh8" [ef8fa5ca-0e24-45a8-a87c-e560b0b3ead3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005265223s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-390748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (70s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m10.001169186s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.00s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bqlvc" [7c92b90f-e8d4-41a7-b514-746ab577a13c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008732502s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-390748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-390748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5czqh" [2ec062e5-cc14-4193-9f77-448b90e1158e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0131 03:08:21.395890 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5czqh" [2ec062e5-cc14-4193-9f77-448b90e1158e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004279648s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-390748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-390748 "pgrep -a kubelet"
E0131 03:08:38.351482 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-390748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7m2r4" [17f0b2cc-18f7-4183-9677-e201f50beb18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7m2r4" [17f0b2cc-18f7-4183-9677-e201f50beb18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.003854577s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (89.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m29.789194146s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.79s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (93.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-390748 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m33.557221717s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.56s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-390748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-390748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-390748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mw2ch" [dd1972f5-7f83-4c2f-9970-3b986eaeb4d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mw2ch" [dd1972f5-7f83-4c2f-9970-3b986eaeb4d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005628211s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (154.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-711547 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-711547 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m34.189950223s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-390748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (123.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-625812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-625812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m3.22028361s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (123.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gbtn7" [d97aecdc-bd51-44fe-9e90-cab978bcbdbf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005707514s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-390748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-390748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z9bqc" [359cc6e0-e2ba-489a-9a74-cb1037db31c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z9bqc" [359cc6e0-e2ba-489a-9a74-cb1037db31c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005726873s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-390748 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-390748 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s6t8p" [73ed20a8-7d64-42b9-a683-94b7febb582e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s6t8p" [73ed20a8-7d64-42b9-a683-94b7febb582e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005837993s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-390748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0131 03:10:30.923739 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-390748 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-390748 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-873005 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-873005 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m7.242618405s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (83.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-229073 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-229073 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m23.22713178s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (83.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-625812 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c86d744d-8d59-4763-b8a4-33d319659ed1] Pending
helpers_test.go:344: "busybox" [c86d744d-8d59-4763-b8a4-33d319659ed1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c86d744d-8d59-4763-b8a4-33d319659ed1] Running
E0131 03:11:41.531193 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:11:41.536553 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:11:41.546852 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:11:41.567203 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:11:41.607585 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:11:41.687912 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:11:41.848713 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:11:42.169448 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00429399s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-625812 exec busybox -- /bin/sh -c "ulimit -n"
E0131 03:11:42.810353 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-625812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-625812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-711547 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cdea59c2-f772-4540-bcfd-66c1429612f1] Pending
helpers_test.go:344: "busybox" [cdea59c2-f772-4540-bcfd-66c1429612f1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0131 03:11:46.652254 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cdea59c2-f772-4540-bcfd-66c1429612f1] Running
E0131 03:11:51.773526 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004059592s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-711547 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-711547 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-711547 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-873005 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [247e337a-2b3b-4e78-b63a-d9826c8a717d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [247e337a-2b3b-4e78-b63a-d9826c8a717d] Running
E0131 03:12:02.014620 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004511133s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-873005 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-873005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-873005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.0837665s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-873005 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-229073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-229073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.352333554s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-229073 --alsologtostderr -v=3
E0131 03:12:22.489241 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:22.495496 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-229073 --alsologtostderr -v=3: (3.115238769s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229073 -n newest-cni-229073
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229073 -n newest-cni-229073: exit status 7 (89.249913ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-229073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
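Worth noting in the step above: immediately after a stop, "out/minikube-linux-amd64 status --format={{.Host}}" prints Stopped but exits nonzero, and the test explicitly tolerates that ("status error: exit status 7 (may be ok)") before re-enabling the dashboard addon. Below is a hedged Go sketch of that tolerant status check, keying off the printed state rather than the exit code alone; it is illustrative, not minikube's or the suite's actual code, and the profile name is just the one from this log.

// Sketch only: tolerant status check for a stopped profile, mirroring the
// "status error ... (may be ok)" handling in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostState(profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile)
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	// A nonzero exit is acceptable as long as a recognisable state (for
	// example "Stopped") was printed, matching the test's behaviour.
	if err != nil && state == "" {
		return "", err
	}
	return state, nil
}

func main() {
	state, err := hostState("newest-cni-229073")
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host state:", state) // expect "Stopped" right after a stop
}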

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (48.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-229073 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0131 03:12:31.555882 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 03:12:32.729843 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:12:48.510210 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 03:12:53.210161 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:13:03.456520 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:13:10.634042 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:10.639387 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:10.649820 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:10.670247 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:10.710545 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:10.791025 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:10.951499 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:11.272323 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:11.912751 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-229073 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (48.247523538s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-229073 -n newest-cni-229073
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-229073 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-229073 --alsologtostderr -v=1
E0131 03:13:13.193549 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229073 -n newest-cni-229073
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229073 -n newest-cni-229073: exit status 2 (277.031041ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229073 -n newest-cni-229073
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229073 -n newest-cni-229073: exit status 2 (284.136981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-229073 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-229073 -n newest-cni-229073
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-229073 -n newest-cni-229073
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (63.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-958254 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0131 03:13:20.874962 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:31.115790 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:13:34.170634 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:13:38.351613 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 03:13:38.886458 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:38.891804 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:38.902122 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:38.922432 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:38.962820 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:39.043254 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:39.203725 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:39.524075 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:40.165241 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:41.445608 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:13:44.006397 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-958254 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m3.773872917s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (695.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-625812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-625812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (11m34.839873001s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-625812 -n no-preload-625812
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (695.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-958254 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [054e4e44-79c5-47a9-a70c-0d73f32c1666] Pending
E0131 03:14:20.997968 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
helpers_test.go:344: "busybox" [054e4e44-79c5-47a9-a70c-0d73f32c1666] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [054e4e44-79c5-47a9-a70c-0d73f32c1666] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004720716s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-958254 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (710.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-711547 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-711547 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (11m50.105890236s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-711547 -n old-k8s-version-711547
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (710.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-958254 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-958254 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.023024817s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-958254 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (868.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-873005 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0131 03:14:41.478663 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:14:56.091935 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:15:00.809434 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:15:12.029546 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:12.034864 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:12.045194 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:12.065525 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:12.105890 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:12.186327 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:12.346737 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:12.667530 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:13.307936 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:13.976750 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 03:15:14.588709 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:17.149278 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:22.270522 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:22.439859 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:15:25.141867 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:25.147224 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:25.157519 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:25.177798 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:25.218183 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:25.298621 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:25.459204 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:25.779704 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:26.420155 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:27.700517 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:30.261325 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:30.923838 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 03:15:32.510743 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:35.381784 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:45.622422 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:15:52.990982 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:15:54.478377 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:16:06.102928 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:16:22.729661 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-873005 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m28.151482687s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-873005 -n default-k8s-diff-port-873005
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (868.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (744.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-958254 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0131 03:17:09.218041 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:17:12.249422 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:17:39.932284 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:17:48.510108 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 03:17:55.872705 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:18:08.984774 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:18:10.633951 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:18:38.319603 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:18:38.350958 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 03:18:38.885941 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:19:00.516246 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:19:06.570697 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:19:28.202605 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:20:12.029345 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:20:25.141331 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:20:30.924328 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
E0131 03:20:39.713111 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:20:52.825644 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:21:41.531246 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/auto-390748/client.crt: no such file or directory
E0131 03:22:12.249669 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/kindnet-390748/client.crt: no such file or directory
E0131 03:22:48.510544 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/functional-618885/client.crt: no such file or directory
E0131 03:23:10.634174 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/calico-390748/client.crt: no such file or directory
E0131 03:23:38.351225 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 03:23:38.886809 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/custom-flannel-390748/client.crt: no such file or directory
E0131 03:24:00.516379 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/enable-default-cni-390748/client.crt: no such file or directory
E0131 03:25:01.396837 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/addons-165032/client.crt: no such file or directory
E0131 03:25:12.029191 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/flannel-390748/client.crt: no such file or directory
E0131 03:25:25.142120 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/bridge-390748/client.crt: no such file or directory
E0131 03:25:30.923628 1419976 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18051-1412717/.minikube/profiles/ingress-addon-legacy-757160/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-958254 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (12m24.286154913s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-958254 -n embed-certs-958254
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (744.59s)

                                                
                                    

Test skip (39/304)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
248 TestNetworkPlugins/group/kubenet 3.33
256 TestNetworkPlugins/group/cilium 6.62
262 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-390748 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-390748" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-390748

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: cri-dockerd version:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: containerd daemon status:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: containerd daemon config:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: containerd config dump:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: crio daemon status:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: crio daemon config:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: /etc/crio:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

>>> host: crio config:
* Profile "kubenet-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-390748"

----------------------- debugLogs end: kubenet-390748 [took: 3.18062941s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-390748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-390748
--- SKIP: TestNetworkPlugins/group/kubenet (3.33s)

x
+
TestNetworkPlugins/group/cilium (6.62s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-390748 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-390748

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-390748

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-390748

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-390748

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-390748

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-390748

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-390748

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-390748

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-390748

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-390748

>>> host: /etc/nsswitch.conf:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /etc/hosts:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /etc/resolv.conf:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-390748

>>> host: crictl pods:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: crictl containers:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> k8s: describe netcat deployment:
error: context "cilium-390748" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-390748" does not exist

>>> k8s: netcat logs:
error: context "cilium-390748" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-390748" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-390748" does not exist

>>> k8s: coredns logs:
error: context "cilium-390748" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-390748" does not exist

>>> k8s: api server logs:
error: context "cilium-390748" does not exist

>>> host: /etc/cni:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: ip a s:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: ip r s:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: iptables-save:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: iptables table nat:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-390748

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-390748

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-390748" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-390748" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-390748

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-390748

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-390748" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-390748" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-390748" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-390748" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-390748" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: kubelet daemon config:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> k8s: kubelet logs:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-390748

>>> host: docker daemon status:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: docker daemon config:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: docker system info:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: cri-docker daemon status:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: cri-docker daemon config:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: cri-dockerd version:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: containerd daemon status:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: containerd daemon config:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: containerd config dump:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: crio daemon status:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: crio daemon config:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: /etc/crio:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

>>> host: crio config:
* Profile "cilium-390748" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-390748"

----------------------- debugLogs end: cilium-390748 [took: 6.465147474s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-390748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-390748
--- SKIP: TestNetworkPlugins/group/cilium (6.62s)

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-096443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-096443
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)